Mounting an iSCSI LUN on an Amazon FSx for NetApp ONTAP file system


I can't help but feel the romance of Amazon FSx for NetApp ONTAP.
2022.05.19


Amazon FSx for NetApp ONTAP is not just a simple file server

Hello, this is のんピ (@non____97).

Have you ever wished for a Multi-AZ EBS volume? I have.

EBS volumes are scoped to a single AZ, which is a little worrying when you think about AZ failures. At the same time, implementing block-level replication yourself sounds like a lot of work.

That's where Amazon FSx for NetApp ONTAP comes in.

Amazon FSx for NetApp ONTAP is not just a file server; it also provides block storage.

Q: What protocols does Amazon FSx for NetApp ONTAP support?

A: Amazon FSx for NetApp ONTAP provides access to shared file storage over all versions of the Network File System (NFS) and Server Message Block (SMB) protocols, and also supports multi-protocol access (i.e., concurrent NFS and SMB access) to the same data.

Amazon FSx for NetApp ONTAP also provides shared block storage over the iSCSI protocol.

NetApp ONTAP file system administration resources – Amazon Web Services

This means you can create an iSCSI LUN on Amazon FSx for NetApp ONTAP and mount it on an EC2 instance.

Since an Amazon FSx for NetApp ONTAP file system can also be deployed in a Multi-AZ configuration, you can treat it as highly available block storage that can ride out an AZ failure. I can't help but feel the romance.

In this post, I'll follow the AWS official documentation below and mount iSCSI LUNs on Amazon Linux 2 and Windows Server.

Mounting an iSCSI LUN on Amazon Linux 2

Creating an iSCSI LUN

I prepared the following SVM and volume.
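The commands below read the IDs from the shell variables $svm_id and $volume_id. As a minimal sketch of how they could be populated (the --query filters are my own assumption, matched to the resource names in this article):

# Resolve the SVM and volume IDs by name into shell variables
$ svm_id=$(aws fsx describe-storage-virtual-machines \
    --query "StorageVirtualMachines[?Name=='classmethod-dev-fsx-netapp-ontap-single-az-svm'].StorageVirtualMachineId" \
    --output text)
$ volume_id=$(aws fsx describe-volumes \
    --query "Volumes[?Name=='classmethod_dev_fsx_netapp_ontap_single_az_volume_lun'].VolumeId" \
    --output text)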

# Check the SVM
$ aws fsx describe-storage-virtual-machines \
    --storage-virtual-machine-ids "$svm_id"
{
    "StorageVirtualMachines": [
        {
            "ActiveDirectoryConfiguration": {
                "NetBiosName": "SINGLE-AZ-SVM",
                "SelfManagedActiveDirectoryConfiguration": {
                    "DomainName": "fsx-dev.classmethod.jp",
                    "OrganizationalUnitDistinguishedName": "OU=FSxForNetAppONTAP,DC=fsx-dev,DC=classmethod,DC=jp",
                    "UserName": "FSxServiceAccount",
                    "DnsIps": [
                        "10.0.0.138"
                    ]
                }
            },
            "CreationTime": "2022-05-19T00:42:07.541000+00:00",
            "Endpoints": {
                "Iscsi": {
                    "DNSName": "iscsi.svm-0a3a78e7c64ff2c5d.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com",
                    "IpAddresses": [
                        "10.0.10.96",
                        "10.0.10.45"
                    ]
                },
                "Management": {
                    "DNSName": "svm-0a3a78e7c64ff2c5d.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com",
                    "IpAddresses": [
                        "10.0.10.31"
                    ]
                },
                "Nfs": {
                    "DNSName": "svm-0a3a78e7c64ff2c5d.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com",
                    "IpAddresses": [
                        "10.0.10.31"
                    ]
                },
                "Smb": {
                    "DNSName": "SINGLE-AZ-SVM.fsx-dev.classmethod.jp",
                    "IpAddresses": [
                        "10.0.10.31"
                    ]
                }
            },
            "FileSystemId": "fs-0967312eff2f5f5e1",
            "Lifecycle": "CREATING",
            "Name": "classmethod-dev-fsx-netapp-ontap-single-az-svm",
            "ResourceARN": "arn:aws:fsx:ap-northeast-1:<AWSアカウントID>:storage-virtual-machine/fs-0967312eff2f5f5e1/svm-0a3a78e7c64ff2c5d",
            "StorageVirtualMachineId": "svm-0a3a78e7c64ff2c5d",
            "Subtype": "DEFAULT",
            "UUID": "a7e1ed55-d70c-11ec-aeb4-877d41bba405"
        }
    ]
}

# Check the volume
$ aws fsx describe-volumes \
    --volume-ids "$volume_id"
{
    "Volumes": [
        {
            "CreationTime": "2022-05-19T00:49:59.169000+00:00",
            "FileSystemId": "fs-0967312eff2f5f5e1",
            "Lifecycle": "CREATED",
            "Name": "classmethod_dev_fsx_netapp_ontap_single_az_volume_lun",
            "OntapConfiguration": {
                "FlexCacheEndpointType": "NONE",
                "JunctionPath": "/lun",
                "SecurityStyle": "MIXED",
                "SizeInMegabytes": 20480,
                "StorageEfficiencyEnabled": true,
                "StorageVirtualMachineId": "svm-0a3a78e7c64ff2c5d",
                "StorageVirtualMachineRoot": false,
                "TieringPolicy": {
                    "CoolingPeriod": 31,
                    "Name": "AUTO"
                },
                "UUID": "a60c3ae5-d70d-11ec-aeb4-877d41bba405",
                "OntapVolumeType": "RW"
            },
            "ResourceARN": "arn:aws:fsx:ap-northeast-1:<AWSアカウントID>:volume/fs-0967312eff2f5f5e1/fsvol-034753ad216df7904",
            "VolumeId": "fsvol-034753ad216df7904",
            "VolumeType": "ONTAP"
        }
    ]
}

We'll create the iSCSI LUN on this SVM and volume.

Connect to the Amazon FSx for NetApp ONTAP file system over SSH and use the NetApp ONTAP CLI.

$ ssh fsxadmin@management.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com
The authenticity of host 'management.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com (10.0.10.191)' can't be established.
ECDSA key fingerprint is SHA256:96acrCV00KXDxH2lksbcKNEkfPcCji/uzBzcfQ6CrAY.
ECDSA key fingerprint is MD5:71:09:56:e5:24:c1:44:49:28:0d:f4:e0:c6:f2:92:31.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'management.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com,10.0.10.191' (ECDSA) to the list of known hosts.
Password:

This is your first recorded login.
::>

Before creating the LUN, check the current list of LUNs.

::> lun show
This table is currently empty.

Now create a LUN, specifying the following properties:

  • svm_name : the name of the SVM that will serve the iSCSI target
  • vol_name : the name of the volume hosting the LUN
  • lun_name : the name to assign to the LUN
  • size : the size of the LUN, in bytes
  • ostype : the OS type

LUNs are created with the lun create command.

::> lun create -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -path /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001 -size 10240 -ostype linux -space-allocation enabled

Created a LUN of size 10k (10240)

A 10 KB LUN was created. Note that without a unit suffix, the -size argument is interpreted in bytes, so 10240 means 10,240 bytes.

Listing the LUNs again confirms that a 10 KB LUN indeed exists.

::> lun show
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001
                                          online  unmapped linux        10KB

Configuring iSCSI on the OS

With the LUN created, next configure iSCSI on Amazon Linux 2 so that it can mount the LUN.

First, check the block devices the OS currently recognizes.

$ lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1       259:0    0   8G  0 disk
├─nvme0n1p1   259:1    0   8G  0 part /
└─nvme0n1p128 259:2    0   1M  0 part

Only one disk shows up.

Next, install iscsi-initiator-utils and device-mapper-multipath. If you want the connection target to be updated automatically when the file server fails over, you need multipath installed.

$ sudo yum install device-mapper-multipath iscsi-initiator-utils -y
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
amzn2-core                                                                                                                           | 3.7 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package device-mapper-multipath.x86_64 0:0.4.9-127.amzn2 will be installed
--> Processing Dependency: device-mapper-multipath-libs = 0.4.9-127.amzn2 for package: device-mapper-multipath-0.4.9-127.amzn2.x86_64
--> Processing Dependency: libmultipath.so.0()(64bit) for package: device-mapper-multipath-0.4.9-127.amzn2.x86_64
--> Processing Dependency: libmpathpersist.so.0()(64bit) for package: device-mapper-multipath-0.4.9-127.amzn2.x86_64
--> Processing Dependency: libmpathcmd.so.0()(64bit) for package: device-mapper-multipath-0.4.9-127.amzn2.x86_64
---> Package iscsi-initiator-utils.x86_64 0:6.2.0.874-7.amzn2 will be installed
--> Processing Dependency: iscsi-initiator-utils-iscsiuio >= 6.2.0.874-7.amzn2 for package: iscsi-initiator-utils-6.2.0.874-7.amzn2.x86_64
--> Running transaction check
---> Package device-mapper-multipath-libs.x86_64 0:0.4.9-127.amzn2 will be installed
---> Package iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-7.amzn2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================================
 Package                                            Arch                       Version                                 Repository                      Size
============================================================================================================================================================
Installing:
 device-mapper-multipath                            x86_64                     0.4.9-127.amzn2                         amzn2-core                     142 k
 iscsi-initiator-utils                              x86_64                     6.2.0.874-7.amzn2                       amzn2-core                     420 k
Installing for dependencies:
 device-mapper-multipath-libs                       x86_64                     0.4.9-127.amzn2                         amzn2-core                     263 k
 iscsi-initiator-utils-iscsiuio                     x86_64                     6.2.0.874-7.amzn2                       amzn2-core                      90 k

Transaction Summary
============================================================================================================================================================
Install  2 Packages (+2 Dependent packages)

Total download size: 915 k
Installed size: 3.3 M
Downloading packages:
(1/4): device-mapper-multipath-0.4.9-127.amzn2.x86_64.rpm                                                                            | 142 kB  00:00:00
(2/4): device-mapper-multipath-libs-0.4.9-127.amzn2.x86_64.rpm                                                                       | 263 kB  00:00:00
(3/4): iscsi-initiator-utils-iscsiuio-6.2.0.874-7.amzn2.x86_64.rpm                                                                   |  90 kB  00:00:00
(4/4): iscsi-initiator-utils-6.2.0.874-7.amzn2.x86_64.rpm                                                                            | 420 kB  00:00:00
------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                       2.9 MB/s | 915 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : iscsi-initiator-utils-6.2.0.874-7.amzn2.x86_64                                                                                           1/4
  Installing : iscsi-initiator-utils-iscsiuio-6.2.0.874-7.amzn2.x86_64                                                                                  2/4
  Installing : device-mapper-multipath-libs-0.4.9-127.amzn2.x86_64                                                                                      3/4
  Installing : device-mapper-multipath-0.4.9-127.amzn2.x86_64                                                                                           4/4
  Verifying  : device-mapper-multipath-libs-0.4.9-127.amzn2.x86_64                                                                                      1/4
  Verifying  : iscsi-initiator-utils-iscsiuio-6.2.0.874-7.amzn2.x86_64                                                                                  2/4
  Verifying  : device-mapper-multipath-0.4.9-127.amzn2.x86_64                                                                                           3/4
  Verifying  : iscsi-initiator-utils-6.2.0.874-7.amzn2.x86_64                                                                                           4/4

Installed:
  device-mapper-multipath.x86_64 0:0.4.9-127.amzn2                             iscsi-initiator-utils.x86_64 0:6.2.0.874-7.amzn2

Dependency Installed:
  device-mapper-multipath-libs.x86_64 0:0.4.9-127.amzn2                      iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-7.amzn2

Complete!

To speed up reconnection when multipath fails over automatically between file servers, change the timeout from its default of 120 seconds to 5 seconds.

# Check the default configuration file
$ sudo cat /etc/iscsi/iscsid.conf
#
# Open-iSCSI default configuration.
# Could be located at /etc/iscsi/iscsid.conf or ~/.iscsid.conf
#
# Note: To set any of these values for a specific node/session run
# the iscsiadm --mode node --op command for the value. See the README
# and man page for iscsiadm for details on the --op command.
#

######################
# iscsid daemon config
######################
# If you want iscsid to start the first time an iscsi tool
# needs to access it, instead of starting it when the init
# scripts run, set the iscsid startup command here. This
# should normally only need to be done by distro package
# maintainers.
#
# Default for Fedora and RHEL. (uncomment to activate).
# Use socket activation, but try to make sure the socket units are listening
iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket
#
# Default for upstream open-iscsi scripts (uncomment to activate).
# iscsid.startup = /sbin/iscsid

# Check for active mounts on devices reachable through a session
# and refuse to logout if there are any.  Defaults to "No".
# iscsid.safe_logout = Yes

#############################
# NIC/HBA and driver settings
#############################
# open-iscsi can create a session and bind it to a NIC/HBA.
# To set this up see the example iface config file.

#*****************
# Startup settings
#*****************

# To request that the iscsi initd scripts startup a session set to "automatic".
# node.startup = automatic
#
# To manually startup the session set to "manual". The default is automatic.
node.startup = automatic

# For "automatic" startup nodes, setting this to "Yes" will try logins on each
# available iface until one succeeds, and then stop.  The default "No" will try
# logins on all available ifaces simultaneously.
node.leading_login = No

# *************
# CHAP Settings
# *************

# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
#node.session.auth.authmethod = CHAP

# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
#node.session.auth.username = username
#node.session.auth.password = password

# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#node.session.auth.username_in = username_in
#node.session.auth.password_in = password_in

# To enable CHAP authentication for a discovery session to the target
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
#discovery.sendtargets.auth.authmethod = CHAP

# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
#discovery.sendtargets.auth.username = username
#discovery.sendtargets.auth.password = password

# To set a discovery session CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#discovery.sendtargets.auth.username_in = username_in
#discovery.sendtargets.auth.password_in = password_in

# ********
# Timeouts
# ********
#
# See the iSCSI README's Advanced Configuration section for tips
# on setting timeouts when using multipath or doing root over iSCSI.
#
# To specify the length of time to wait for session re-establishment
# before failing SCSI commands back to the application when running
# the Linux SCSI Layer error handler, edit the line.
# The value is in seconds and the default is 120 seconds.
# Special values:
# - If the value is 0, IO will be failed immediately.
# - If the value is less than 0, IO will remain queued until the session
# is logged back in, or until the user runs the logout command.
node.session.timeo.replacement_timeout = 120

# To specify the time to wait for login to complete, edit the line.
# The value is in seconds and the default is 15 seconds.
node.conn[0].timeo.login_timeout = 15

# To specify the time to wait for logout to complete, edit the line.
# The value is in seconds and the default is 15 seconds.
node.conn[0].timeo.logout_timeout = 15

# Time interval to wait for on connection before sending a ping.
node.conn[0].timeo.noop_out_interval = 5

# To specify the time to wait for a Nop-out response before failing
# the connection, edit this line. Failing the connection will
# cause IO to be failed back to the SCSI layer. If using dm-multipath
# this will cause the IO to be failed to the multipath layer.
node.conn[0].timeo.noop_out_timeout = 5

# To specify the time to wait for abort response before
# failing the operation and trying a logical unit reset edit the line.
# The value is in seconds and the default is 15 seconds.
node.session.err_timeo.abort_timeout = 15

# To specify the time to wait for a logical unit response
# before failing the operation and trying session re-establishment
# edit the line.
# The value is in seconds and the default is 30 seconds.
node.session.err_timeo.lu_reset_timeout = 30

# To specify the time to wait for a target response
# before failing the operation and trying session re-establishment
# edit the line.
# The value is in seconds and the default is 30 seconds.
node.session.err_timeo.tgt_reset_timeout = 30


#******
# Retry
#******

# To specify the number of times iscsid should retry a login
# if the login attempt fails due to the node.conn[0].timeo.login_timeout
# expiring modify the following line. Note that if the login fails
# quickly (before node.conn[0].timeo.login_timeout fires) because the network
# layer or the target returns an error, iscsid may retry the login more than
# node.session.initial_login_retry_max times.
#
# This retry count along with node.conn[0].timeo.login_timeout
# determines the maximum amount of time iscsid will try to
# establish the initial login. node.session.initial_login_retry_max is
# multiplied by the node.conn[0].timeo.login_timeout to determine the
# maximum amount.
#
# The default node.session.initial_login_retry_max is 8 and
# node.conn[0].timeo.login_timeout is 15 so we have:
#
# node.conn[0].timeo.login_timeout * node.session.initial_login_retry_max =
#                                                               120 seconds
#
# Valid values are any integer value. This only
# affects the initial login. Setting it to a high value can slow
# down the iscsi service startup. Setting it to a low value can
# cause a session to not get logged into, if there are distuptions
# during startup or if the network is not ready at that time.
node.session.initial_login_retry_max = 8

################################
# session and device queue depth
################################

# To control how many commands the session will queue set
# node.session.cmds_max to an integer between 2 and 2048 that is also
# a power of 2. The default is 128.
node.session.cmds_max = 128

# To control the device's queue depth set node.session.queue_depth
# to a value between 1 and 1024. The default is 32.
node.session.queue_depth = 32

##################################
# MISC SYSTEM PERFORMANCE SETTINGS
##################################

# For software iscsi (iscsi_tcp) and iser (ib_iser) each session
# has a thread used to transmit or queue data to the hardware. For
# cxgb3i you will get a thread per host.
#
# Setting the thread's priority to a lower value can lead to higher throughput
# and lower latencies. The lowest value is -20. Setting the priority to
# a higher value, can lead to reduced IO performance, but if you are seeing
# the iscsi or scsi threads dominate the use of the CPU then you may want
# to set this value higher.
#
# Note: For cxgb3i you must set all sessions to the same value, or the
# behavior is not defined.
#
# The default value is -20. The setting must be between -20 and 20.
node.session.xmit_thread_priority = -20


#***************
# iSCSI settings
#***************

# To enable R2T flow control (i.e., the initiator must wait for an R2T
# command before sending any data), uncomment the following line:
#
#node.session.iscsi.InitialR2T = Yes
#
# To disable R2T flow control (i.e., the initiator has an implied
# initial R2T of "FirstBurstLength" at offset 0), uncomment the following line:
#
# The defaults is No.
node.session.iscsi.InitialR2T = No

#
# To disable immediate data (i.e., the initiator does not send
# unsolicited data with the iSCSI command PDU), uncomment the following line:
#
#node.session.iscsi.ImmediateData = No
#
# To enable immediate data (i.e., the initiator sends unsolicited data
# with the iSCSI command packet), uncomment the following line:
#
# The default is Yes
node.session.iscsi.ImmediateData = Yes

# To specify the maximum number of unsolicited data bytes the initiator
# can send in an iSCSI PDU to a target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 262144
node.session.iscsi.FirstBurstLength = 262144

# To specify the maximum SCSI payload that the initiator will negotiate
# with the target for, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the defauls it 16776192
node.session.iscsi.MaxBurstLength = 16776192

# To specify the maximum number of data bytes the initiator can receive
# in an iSCSI PDU from a target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 262144
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144

# To specify the maximum number of data bytes the initiator will send
# in an iSCSI PDU to the target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1).
# Zero is a special case. If set to zero, the initiator will use
# the target's MaxRecvDataSegmentLength for the MaxXmitDataSegmentLength.
# The default is 0.
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0

# To specify the maximum number of data bytes the initiator can receive
# in an iSCSI PDU from a target during a discovery session, edit the
# following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 32768
#
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768

# To allow the targets to control the setting of the digest checking,
# with the initiator requesting a preference of enabling the checking, uncomment
# the following lines (Data digests are not supported.):
#node.conn[0].iscsi.HeaderDigest = CRC32C,None

#
# To allow the targets to control the setting of the digest checking,
# with the initiator requesting a preference of disabling the checking,
# uncomment the following line:
#node.conn[0].iscsi.HeaderDigest = None,CRC32C
#
# To enable CRC32C digest checking for the header and/or data part of
# iSCSI PDUs, uncomment the following line:
#node.conn[0].iscsi.HeaderDigest = CRC32C
#
# To disable digest checking for the header and/or data part of
# iSCSI PDUs, uncomment the following line:
#node.conn[0].iscsi.HeaderDigest = None
#
# The default is to never use DataDigests or HeaderDigests.
#
node.conn[0].iscsi.HeaderDigest = None

# For multipath configurations, you may want more than one session to be
# created on each iface record.  If node.session.nr_sessions is greater
# than 1, performing a 'login' for that node will ensure that the
# appropriate number of sessions is created.
node.session.nr_sessions = 1

#************
# Workarounds
#************

# Some targets like IET prefer after an initiator has sent a task
# management function like an ABORT TASK or LOGICAL UNIT RESET, that
# it does not respond to PDUs like R2Ts. To enable this behavior uncomment
# the following line (The default behavior is Yes):
node.session.iscsi.FastAbort = Yes

# Some targets like Equalogic prefer that after an initiator has sent
# a task management function like an ABORT TASK or LOGICAL UNIT RESET, that
# it continue to respond to R2Ts. To enable this uncomment this line
# node.session.iscsi.FastAbort = No

# To prevent doing automatic scans that would add unwanted luns to the system
# we can disable them and have sessions only do manually requested scans.
# Automatic scans are performed on startup, on login, and on AEN/AER reception
# on devices supporting it.  For HW drivers all sessions will use the value
# defined in the configuration file.  This configuration option is independent
# of scsi_mod scan parameter. (The default behavior is auto):
node.session.scan = auto

# Change the timeout value to 5
$ sudo sed -i 's/node.session.timeo.replacement_timeout = .*/node.session.timeo.replacement_timeout = 5/' /etc/iscsi/iscsid.conf

# Verify the change
$ sudo cat /etc/iscsi/iscsid.conf \
    | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 5

Start the iSCSI service.

# Start the iSCSI service
$ sudo systemctl start iscsid

# Confirm the iSCSI service is running
$ sudo systemctl status iscsid.service
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-05-19 01:10:49 UTC; 9s ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 1470 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 1472 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─1471 /usr/sbin/iscsid
           └─1472 /usr/sbin/iscsid

May 19 01:10:49 ip-10-0-0-93.ap-northeast-1.compute.internal systemd[1]: Starting Open-iSCSI...
May 19 01:10:49 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[1470]: iSCSI logger with pid=1471 started!
May 19 01:10:49 ip-10-0-0-93.ap-northeast-1.compute.internal systemd[1]: Failed to parse PID from file /var/run/iscsid.pid: Invalid argument
May 19 01:10:49 ip-10-0-0-93.ap-northeast-1.compute.internal systemd[1]: Started Open-iSCSI.
May 19 01:10:50 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[1471]: iSCSI daemon with pid=1472 started!
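Incidentally, the status output shows the unit as disabled, so iscsid won't start at boot. If the sessions should come back after a reboot, you would presumably also want to enable it (a step not shown in this article):

# Enable iscsid at boot (assumption: you want sessions restored after a reboot)
$ sudo systemctl enable iscsid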

Next, set up multipathing so that the client can fail over automatically between file servers.

$ sudo mpathconf --enable --with_multipathd y
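mpathconf --enable generates a default /etc/multipath.conf, and --with_multipathd y also starts the multipathd daemon. A quick sanity check:

# multipathd should now be running
$ sudo systemctl is-active multipathd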

Finally, check this EC2 instance's initiator name.

$ sudo cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2f3cecbb7216

Configuring the iSCSI target

Next, configure the iSCSI target on the Amazon FSx for NetApp ONTAP file system.

Connect to the file system over SSH and use the NetApp ONTAP CLI.

$ ssh fsxadmin@management.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com
Password:

Last login time: 5/19/2022 00:57:46

Create an initiator group (igroup). An igroup is mapped to an iSCSI LUN and controls which initiators can access that LUN. The EC2 instance initiator name we checked earlier is used here.

::> lun igroup create -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -igroup igroup_001 -initiator iqn.1994-05.com.redhat:2f3cecbb7216 -protocol iscsi -ostype linux

Listing the initiator groups confirms that one igroup has been created.

::> lun igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_001   iscsi    linux    iqn.1994-05.com.redhat:2f3cecbb7216

Next, create a mapping between the LUN and the igroup.

::> lun mapping create -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -path /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001 -igroup igroup_001 -lun-id 001

Checking the LUN shows that it is now mapped.

::> lun show -path /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001 -fields state,mapped,serial-hex
vserver                                        path                                                               serial-hex               state  mapped
---------------------------------------------- ------------------------------------------------------------------ ------------------------ ------ ------
classmethod-dev-fsx-netapp-ontap-single-az-svm /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001 6c574231752b53784865462f online mapped

::> lun show -path /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001
                                          online  mapped   linux        10KB

Finally, note the IP addresses of the SVM's iscsi_1 and iscsi_2 interfaces.

::> network interface show -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
classmethod-dev-fsx-netapp-ontap-single-az-svm
            iscsi_1      up/up    10.0.10.96/24      -01
                                                                   e0e     true
            iscsi_2      up/up    10.0.10.45/24      -02
                                                                   e0e     true
            nfs_smb_management_1
                         up/up    10.0.10.31/24      -01
                                                                   e0e     true
3 entries were displayed.


Mounting the iSCSI LUN on Amazon Linux 2

Now let's actually mount the iSCSI LUN on Amazon Linux 2.

First, use the IP addresses we noted earlier to discover the target iSCSI nodes.

$ sudo iscsiadm --mode discovery --op update --type sendtargets --portal 10.0.10.96
10.0.10.96:3260,1029 iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
10.0.10.45:3260,1030 iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3

Next, establish four sessions per initiator for each ONTAP node in each AZ, so that the EC2 instance can go beyond the 5 Gb/s bandwidth limit and connect to the iSCSI LUN at up to 20 Gb/s.

$ sudo iscsiadm --mode node -T iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3 --op update -n node.session.nr_sessions -v 4
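This command only updates the node database, so it prints nothing on success. To confirm the setting took, you can dump the node config and grep for it; it should now read 4:

# Check the per-node session count (expecting node.session.nr_sessions = 4)
$ sudo iscsiadm --mode node -T iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3 \
    | grep node.session.nr_sessions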

Log in to the target iSCSI nodes. The iSCSI LUN we created will then be recognized as an available disk.

$ sudo iscsiadm --mode node -T iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3 --login
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] successful.

Checking the current multipath configuration, we can confirm there really are 2 nodes × 4 sessions.

$ sudo multipath -ll
3600a09806c574231752b53784865462f dm-0 NETAPP  ,LUN C-Mode
size=10K features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 3:0:0:1 sdd     8:48  active ready running
| |- 0:0:0:1 sdb     8:16  active ready running
| |- 2:0:0:1 sdc     8:32  active ready running
| `- 1:0:0:1 sda     8:0   active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 5:0:0:1 sde     8:64  active ready running
  |- 4:0:0:1 sdf     8:80  active ready running
  |- 6:0:0:1 sdg     8:96  active ready running
  `- 7:0:0:1 sdh     8:112 active ready running
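You can also count the sessions directly; iscsiadm should list eight of them (2 portals × 4 sessions):

# List the active iSCSI sessions
$ sudo iscsiadm --mode session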

lsblk also shows the LUN is recognized as a block device.

$ lsblk
NAME                                MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0  10K  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdb                                   8:16   0  10K  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdc                                   8:32   0  10K  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdd                                   8:48   0  10K  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sde                                   8:64   0  10K  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdf                                   8:80   0  10K  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdg                                   8:96   0  10K  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdh                                   8:112  0  10K  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
nvme0n1                             259:0    0   8G  0 disk
├─nvme0n1p1                         259:1    0   8G  0 part  /
└─nvme0n1p128                       259:2    0   1M  0 part

The next step is partitioning the disk, but partitioning a 10 KB disk isn't exactly thrilling.

So let's expand the 10 KB LUN to 10 GB.

Using the NetApp ONTAP CLI, first check the current list of LUNs.

::> lun show
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001
                                          online  mapped   linux        10KB

Still 10 KB. Resize it to 10 GB with lun resize.

::> lun resize -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -path /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001 -size 10G

Listing the LUNs again confirms the size is now 10 GB.

::> lun show
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001
                                          online  mapped   linux        10GB

# Exit the NetApp ONTAP CLI
::> exit
Goodbye


Connection to management.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com closed.

Now that the LUN has been resized, let's check how the OS sees it.

Looking at the current multipath and block device lists, each disk is recognized as 10 GB, but the multipath device is still 10 KB.

$ sudo multipath -ll
3600a09806c574231752b53784865462f dm-0 NETAPP  ,LUN C-Mode
size=10K features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 3:0:0:1 sdd     8:48  active ready running
| |- 0:0:0:1 sdb     8:16  active ready running
| |- 2:0:0:1 sdc     8:32  active ready running
| `- 1:0:0:1 sda     8:0   active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 5:0:0:1 sde     8:64  active ready running
  |- 4:0:0:1 sdf     8:80  active ready running
  |- 6:0:0:1 sdg     8:96  active ready running
  `- 7:0:0:1 sdh     8:112 active ready running

$ lsblk
NAME                                MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdb                                   8:16   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdc                                   8:32   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdd                                   8:48   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sde                                   8:64   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdf                                   8:80   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdg                                   8:96   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
sdh                                   8:112  0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10K  0 mpath
nvme0n1                             259:0    0   8G  0 disk
├─nvme0n1p1                         259:1    0   8G  0 part  /
└─nvme0n1p128                       259:2    0   1M  0 part

After restarting the multipath service, the multipath device is also recognized as 10 GB.

$ sudo systemctl restart multipathd.service

$ sudo multipath -ll
3600a09806c574231752b53784865462f dm-0 NETAPP  ,LUN C-Mode
size=10G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:0:1 sdb     8:16  active ready running
| |- 1:0:0:1 sda     8:0   active ready running
| |- 2:0:0:1 sdc     8:32  active ready running
| `- 3:0:0:1 sdd     8:48  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 4:0:0:1 sdf     8:80  active ready running
  |- 5:0:0:1 sde     8:64  active ready running
  |- 6:0:0:1 sdg     8:96  active ready running
  `- 7:0:0:1 sdh     8:112 active ready running

$ lsblk
NAME                                MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sda                                   8:0    0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10G  0 mpath
sdb                                   8:16   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10G  0 mpath
sdc                                   8:32   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10G  0 mpath
sdd                                   8:48   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10G  0 mpath
sde                                   8:64   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10G  0 mpath
sdf                                   8:80   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10G  0 mpath
sdg                                   8:96   0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10G  0 mpath
sdh                                   8:112  0  10G  0 disk
└─3600a09806c574231752b53784865462f 253:0    0  10G  0 mpath
nvme0n1                             259:0    0   8G  0 disk
├─nvme0n1p1                         259:1    0   8G  0 part  /
└─nvme0n1p128                       259:2    0   1M  0 part
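As a side note, restarting multipathd is the blunt instrument. Depending on the multipath-tools version, asking the running daemon to resize just this map might also work (untested here):

# Untested alternative: resize the map in place via multipathd's command mode
$ sudo multipathd -k'resize map 3600a09806c574231752b53784865462f'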

With the groundwork done, it's time to partition.

Before partitioning, confirm once more that the disk is recognized.

$ ls -l /dev/mapper/3600a09806c574231752b53784865462f
lrwxrwxrwx 1 root root      7 May 19 01:58 /dev/mapper/3600a09806c574231752b53784865462f -> ../dm-0

We'll partition the disk 3600a09806c574231752b53784865462f into two partitions, 2 GB and 3 GB, using interactive fdisk.
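(For reference, the same layout could also be scripted non-interactively. A minimal sfdisk sketch, untested against this environment:)

# Create the 2 GiB and 3 GiB partitions with default start offsets
$ printf ',2G\n,3G\n' | sudo sfdisk /dev/mapper/3600a09806c574231752b53784865462f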

$ sudo fdisk /dev/mapper/3600a09806c574231752b53784865462f

Welcome to fdisk (util-linux 2.30.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x5060b99a.

Command (m for help): m

Help:

  DOS (MBR)
   a   toggle a bootable flag
   b   edit nested BSD disklabel
   c   toggle the dos compatibility flag

  Generic
   d   delete a partition
   F   list free unpartitioned space
   l   list known partition types
   n   add a new partition
   p   print the partition table
   t   change a partition type
   v   verify the partition table
   i   print information about a partition

  Misc
   m   print this menu
   u   change display/entry units
   x   extra functionality (experts only)

  Script
   I   load disk layout from sfdisk script file
   O   dump disk layout to sfdisk script file

  Save & Exit
   w   write table to disk and exit
   q   quit without saving changes

  Create a new label
   g   create a new empty GPT partition table
   G   create a new empty SGI (IRIX) partition table
   o   create a new empty DOS partition table
   s   create a new empty Sun partition table


Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 2048
Last sector, +sectors or +size{K,M,G,T,P} (2048-20971519, default 20971519): +2G

Created a new partition 1 of type 'Linux' and of size 2 GiB.

Command (m for help): F
Unpartitioned space /dev/mapper/3600a09806c574231752b53784865462f: 8 GiB, 8588886016 bytes, 16775168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes

  Start      End  Sectors Size
4196352 20971519 16775168   8G

Command (m for help): p
Disk /dev/mapper/3600a09806c574231752b53784865462f: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device                                              Boot Start     End Sectors Size Id Type
/dev/mapper/3600a09806c574231752b53784865462f-part1       2048 4196351 4194304   2G 83 Linux

Command (m for help): i
Selected partition 1
         Device: /dev/mapper/3600a09806c574231752b53784865462f-part1
          Start: 2048
            End: 4196351
        Sectors: 4194304
      Cylinders: 2049
           Size: 2G
             Id: 83
           Type: Linux
    Start-C/H/S: 1/0/1
      End-C/H/S: 0/63/32

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2): 2
First sector (4196352-20971519, default 4196352): 4196352
Last sector, +sectors or +size{K,M,G,T,P} (4196352-20971519, default 20971519): +3G

Created a new partition 2 of type 'Linux' and of size 3 GiB.

Command (m for help): F
Unpartitioned space /dev/mapper/3600a09806c574231752b53784865462f: 5 GiB, 5367660544 bytes, 10483712 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes

   Start      End  Sectors Size
10487808 20971519 10483712   5G

Command (m for help): p
Disk /dev/mapper/3600a09806c574231752b53784865462f: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device                                              Boot   Start      End Sectors Size Id Type
/dev/mapper/3600a09806c574231752b53784865462f-part1         2048  4196351 4194304   2G 83 Linux
/dev/mapper/3600a09806c574231752b53784865462f-part2      4196352 10487807 6291456   3G 83 Linux

Command (m for help): i
Partition number (1,2, default 2): 2

         Device: /dev/mapper/3600a09806c574231752b53784865462f-part2
          Start: 4196352
            End: 10487807
        Sectors: 6291456
      Cylinders: 3073
           Size: 3G
             Id: 83
           Type: Linux
    Start-C/H/S: 1/0/1
      End-C/H/S: 0/63/32

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Invalid argument

The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).

fdisk tells us the kernel will keep using the old partition table until the next reboot (or until partprobe(8)/kpartx(8) is run), so let's just reboot the OS.

$ sudo systemctl reboot
Terminated
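(As the fdisk message suggests, partprobe(8) or kpartx(8) might have avoided the reboot; for a multipath device, something like the following, though I haven't verified it here:)

# Untested: refresh the partition mappings on the multipath device
$ sudo kpartx -u /dev/mapper/3600a09806c574231752b53784865462f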

After the reboot, checking the disks shows that the partitions are in place.

$ lsblk
NAME                                   MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sda                                      8:0    0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdb                                      8:16   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdc                                      8:32   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdd                                      8:48   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sde                                      8:64   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdf                                      8:80   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdg                                      8:96   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdh                                      8:112  0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
nvme0n1                                259:0    0   8G  0 disk
├─nvme0n1p1                            259:1    0   8G  0 part  /
└─nvme0n1p128                          259:2    0   1M  0 part

$ sudo fdisk -l
Disk /dev/nvme0n1: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 30B4269A-8501-4012-B8AE-5DBF6CBBACA8

Device           Start      End  Sectors Size Type
/dev/nvme0n1p1    4096 16777182 16773087   8G Linux filesystem
/dev/nvme0n1p128  2048     4095     2048   1M BIOS boot

Partition table entries are not in disk order.


Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device     Boot   Start      End Sectors Size Id Type
/dev/sdb1          2048  4196351 4194304   2G 83 Linux
/dev/sdb2       4196352 10487807 6291456   3G 83 Linux


Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device     Boot   Start      End Sectors Size Id Type
/dev/sda1          2048  4196351 4194304   2G 83 Linux
/dev/sda2       4196352 10487807 6291456   3G 83 Linux


Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device     Boot   Start      End Sectors Size Id Type
/dev/sdd1          2048  4196351 4194304   2G 83 Linux
/dev/sdd2       4196352 10487807 6291456   3G 83 Linux


Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device     Boot   Start      End Sectors Size Id Type
/dev/sdc1          2048  4196351 4194304   2G 83 Linux
/dev/sdc2       4196352 10487807 6291456   3G 83 Linux


Disk /dev/sdf: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device     Boot   Start      End Sectors Size Id Type
/dev/sdf1          2048  4196351 4194304   2G 83 Linux
/dev/sdf2       4196352 10487807 6291456   3G 83 Linux


Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device     Boot   Start      End Sectors Size Id Type
/dev/sde1          2048  4196351 4194304   2G 83 Linux
/dev/sde2       4196352 10487807 6291456   3G 83 Linux


Disk /dev/sdh: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device     Boot   Start      End Sectors Size Id Type
/dev/sdh1          2048  4196351 4194304   2G 83 Linux
/dev/sdh2       4196352 10487807 6291456   3G 83 Linux


Disk /dev/sdg: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device     Boot   Start      End Sectors Size Id Type
/dev/sdg1          2048  4196351 4194304   2G 83 Linux
/dev/sdg2       4196352 10487807 6291456   3G 83 Linux


Disk /dev/mapper/3600a09806c574231752b53784865462f: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x5060b99a

Device                                              Boot   Start      End Sectors Size Id Type
/dev/mapper/3600a09806c574231752b53784865462f-part1         2048  4196351 4194304   2G 83 Linux
/dev/mapper/3600a09806c574231752b53784865462f-part2      4196352 10487807 6291456   3G 83 Linux

Format each partition as ext4.

$ sudo mkfs.ext4 /dev/mapper/3600a09806c574231752b53784865462f1
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=16 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

$ sudo mkfs.ext4 /dev/mapper/3600a09806c574231752b53784865462f2
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=16 blocks
196608 inodes, 786432 blocks
39321 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=805306368
24 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Now, at last, the mount.

# Create the mount points
$ sudo mkdir -p /lun/part1
$ sudo mkdir /lun/part2

# Check the mount points
$ ls -l /lun
total 0
drwxr-xr-x 2 root root 6 May 19 02:21 part1
drwxr-xr-x 2 root root 6 May 19 02:21 part2

# Mount
$ sudo mount -t ext4 /dev/mapper/3600a09806c574231752b53784865462f1 /lun/part1
$ sudo mount -t ext4 /dev/mapper/3600a09806c574231752b53784865462f2 /lun/part2

# Check free disk space
$ df -h
Filesystem                                      Size  Used Avail Use% Mounted on
devtmpfs                                        462M     0  462M   0% /dev
tmpfs                                           470M     0  470M   0% /dev/shm
tmpfs                                           470M  404K  470M   1% /run
tmpfs                                           470M     0  470M   0% /sys/fs/cgroup
/dev/nvme0n1p1                                  8.0G  1.6G  6.5G  20% /
/dev/mapper/3600a09806c574231752b53784865462f1  2.0G  6.0M  1.8G   1% /lun/part1
/dev/mapper/3600a09806c574231752b53784865462f2  2.9G  9.0M  2.8G   1% /lun/part2

Both partitions mounted successfully.
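These mounts won't survive a reboot as-is. To make them persistent, one approach is /etc/fstab entries keyed by filesystem UUID, with _netdev so mounting waits until the network (and thus iSCSI) is up. The UUIDs below are placeholders; look up the real ones with blkid:

# Look up the filesystem UUIDs
$ sudo blkid /dev/mapper/3600a09806c574231752b53784865462f1 /dev/mapper/3600a09806c574231752b53784865462f2

# Example /etc/fstab entries (UUIDs are placeholders)
UUID=<part1-uuid>  /lun/part1  ext4  _netdev  0  2
UUID=<part2-uuid>  /lun/part2  ext4  _netdev  0  2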

Finally, confirm that we can write to them.

# Change the owner of the directories
$ sudo chown ssm-user:ssm-user /lun/part1
$ sudo chown ssm-user:ssm-user /lun/part2

# Check the directory owners
$ ls -l /lun
total 8
drwxr-xr-x 3 ssm-user ssm-user 4096 May 19 02:36 part1
drwxr-xr-x 3 ssm-user ssm-user 4096 May 19 02:35 part2

# Write
$ echo 'Hello world!' > /lun/part1/HelloWorld.txt
$ echo 'Hello world!' > /lun/part2/HelloWorld.txt

# Check what was written
$ cat /lun/part1/HelloWorld.txt
Hello world!

$ cat /lun/part2/HelloWorld.txt
Hello world!

We can write. Hooray!

Mounting an iSCSI LUN on Windows Server

Starting the iSCSI initiator service

Now that the Amazon Linux 2 testing is done, let's mount an iSCSI LUN on Windows Server.

First, start the iSCSI initiator service on Windows Server.

# Check the iSCSI initiator service status
> Get-Service MSiSCSI

Status   Name               DisplayName
------   ----               -----------
Stopped  MSiSCSI            Microsoft iSCSI Initiator Service

# Start the iSCSI initiator service
> Start-Service MSiSCSI

# Check the iSCSI initiator service status
> Get-Service MSiSCSI

Status   Name               DisplayName
------   ----               -----------
Running  MSiSCSI            Microsoft iSCSI Initiator Service


# Enable automatic startup of the iSCSI initiator service
> Set-Service -Name msiscsi -StartupType Automatic

Get this EC2 instance's initiator name.

> (Get-InitiatorPort).NodeAddress
iqn.1991-05.com.microsoft:ec2amaz-69ctq9g

Install Multipath-IO so that the client can fail over automatically between file servers.

> Install-WindowsFeature Multipath-IO

Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    Yes            SuccessRest... {Multipath I/O}
WARNING: You must restart this server to finish the installation process.


The installer prompts for a restart, so reboot the OS.

Creating the iSCSI LUN

Next, create the LUN.

Check the current list of LUNs with the NetApp ONTAP CLI.

::> lun show
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001
                                          online  mapped   linux        10GB

There's the LUN we used for the Amazon Linux 2 testing.

Its type is linux, so we'll create a new LUN for Windows Server.

We'll size this LUN at the maximum the volume can currently allocate, so first check the maximum size a LUN can be set to.

::> lun maxsize
                                                 Without   With SS   Complete
Vserver    Volume       Qtree        OS Type  SS Reserve   Reserve SS Reserve
---------- ------------ ------------ -------- ---------- --------- ----------
classmethod-dev-fsx-netapp-ontap-single-az-svm
           classmethod_dev_fsx_netapp_ontap_single_az_volume_lun
                        ""           aix          9.05GB    9.05GB     4.52GB
                                     hpux         9.05GB    9.05GB     4.52GB
                                     hyper_v      9.05GB    9.05GB     4.52GB
                                     linux        9.05GB    9.05GB     4.52GB
                                     netware      9.05GB    9.05GB     4.52GB
                                     openvms      9.05GB    9.05GB     4.52GB
                                     solaris      9.05GB    9.05GB     4.52GB
                                     solaris_efi  9.05GB    9.05GB     4.52GB
                                     vmware       9.05GB    9.05GB     4.52GB
                                     windows      9.05GB    9.05GB     4.52GB
                                     windows_2008 9.05GB    9.05GB     4.52GB
                                     windows_gpt  9.05GB    9.05GB     4.52GB
                                     xen          9.05GB    9.05GB     4.52GB
13 entries were displayed.

The maximum turns out to be 9.05GB, so create a 9.05GB LUN.

# Create the LUN
::> lun create -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -path /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_002 -size 9.05G -ostype windows -space-allocation enabled

Created a LUN of size 9g (9717363507)

# List the LUNs
::> lun show
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_001
                                          online  mapped   linux        10GB
classmethod-dev-fsx-netapp-ontap-single-az-svm
          /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_002
                                          online  unmapped windows    9.05GB
2 entries were displayed.

# Check the maximum configurable LUN size again
::> lun maxsize
                                                 Without   With SS   Complete
Vserver    Volume       Qtree        OS Type  SS Reserve   Reserve SS Reserve
---------- ------------ ------------ -------- ---------- --------- ----------
classmethod-dev-fsx-netapp-ontap-single-az-svm
           classmethod_dev_fsx_netapp_ontap_single_az_volume_lun
                        ""           aix         11.21MB   11.21MB     5.59MB
                                     hpux        11.21MB   11.21MB     5.59MB
                                     hyper_v     11.21MB   11.21MB     5.59MB
                                     linux       11.21MB   11.21MB     5.59MB
                                     netware     11.21MB   11.21MB     5.59MB
                                     openvms     11.21MB   11.21MB     5.59MB
                                     solaris     11.21MB   11.21MB     5.59MB
                                     solaris_efi 11.21MB   11.21MB     5.59MB
                                     vmware      11.21MB   11.21MB     5.59MB
                                     windows     11.21MB   11.21MB     5.59MB
                                     windows_2008
                                                 11.21MB   11.21MB     5.59MB
                                     windows_gpt 11.21MB   11.21MB     5.59MB
                                     xen         11.21MB   11.21MB     5.59MB
13 entries were displayed.

# Check volume space usage
::> volume show-space

      Vserver : classmethod-dev-fsx-netapp-ontap-single-az-svm
      Volume  : classmethod_dev_fsx_netapp_ontap_single_az_svm_root

      Feature                                    Used      Used%
      --------------------------------     ----------     ------
      User Data                                  40KB         0%
      Filesystem Metadata                       344KB         0%
      Inodes                                     20KB         0%
      Snapshot Reserve                        51.20MB         5%
      Performance Metadata                      340KB         0%

      Total Used                              51.93MB         5%

      Total Physical Used                      1.55MB         0%


      Vserver : classmethod-dev-fsx-netapp-ontap-single-az-svm
      Volume  : classmethod_dev_fsx_netapp_ontap_single_az_volume_lun

      Feature                                    Used      Used%
      --------------------------------     ----------     ------
      User Data                               18.99GB        95%
      Filesystem Metadata                       460KB         0%
      Inodes                                     20KB         0%
      Snapshot Reserve                            1GB         5%
      Deduplication                              76KB         0%
      Performance Metadata                      260KB         0%

      Total Used                              19.99GB       100%

      Total Physical Used                     20.85MB         0%

2 entries were displayed.

The LUN is created, so now create an initiator group and map the LUN to it.

# Create the initiator group
::> lun igroup create -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -igroup igroup_002 -initiator iqn.1991-05.com.microsoft:ec2amaz-69ctq9g -protocol iscsi -ostype windows

# List the initiator groups
::> lun igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_001   iscsi    linux    iqn.1994-05.com.redhat:2f3cecbb7216
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_002   iscsi    windows  iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
2 entries were displayed.


# Map the LUN to the initiator group
::> lun mapping create -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -path /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_002 -igroup igroup_002 -lun-id 002

# List the initiator groups again
::> lun igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_001   iscsi    linux    iqn.1994-05.com.redhat:2f3cecbb7216
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_002   iscsi    windows  iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
2 entries were displayed.

# Verify the mapping of the new LUN
::> lun show -path /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_002
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          /vol/classmethod_dev_fsx_netapp_ontap_single_az_volume_lun/lun_002
                                          online  mapped   windows    9.05GB

Mounting the iSCSI LUN on Windows Server

After the OS restart, create a .ps1 file that does the following:

  • Connect to each file system's iSCSI interfaces
  • Add and configure Multipath-IO for iSCSI
  • Establish 4 sessions per iSCSI connection

A template .ps1 file is provided in the AWS official documentation below.

The actual file and commands are as follows.

# Describe the steps in a here-string
> $establish_iscsi = @'
#iSCSI IP addresses for Preferred and Standby subnets 
$TargetPortalAddresses = @("10.0.10.96","10.0.10.45") 
                                    
#iSCSI Initator IP Address (Local node IP address) 
$LocaliSCSIAddress = "10.0.0.49" 
                                    
#Connect to FSx for NetApp ONTAP file system 
Foreach ($TargetPortalAddress in $TargetPortalAddresses) { 
  New-IscsiTargetPortal -TargetPortalAddress $TargetPortalAddress 
  -TargetPortalPortNumber 3260 -InitiatorPortalAddress $LocaliSCSIAddress 
} 
                                    
#Add MPIO support for iSCSI 
New-MSDSMSupportedHW -VendorId MSFT2005 -ProductId iSCSIBusType_0x9 
                                    
#Establish iSCSI connection 
1..4 | %{Foreach($TargetPortalAddress in $TargetPortalAddresses){
  Get-IscsiTarget `
    | Connect-IscsiTarget `
        -IsMultipathEnabled $true `
        -TargetPortalAddress $TargetPortalAddress `
        -InitiatorPortalAddress $LocaliSCSIAddress `
        -IsPersistent $true}
} 
                                    
#Set the MPIO Policy to Round Robin 
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR 
'@ 

# Write the here-string to a file
> Write-Output $establish_iscsi `
    | Out-File  "establish_iscsi.ps1"

After creating the .ps1 file, run it.

> .\establish_iscsi.ps1


InitiatorInstanceName  :
InitiatorPortalAddress :
IsDataDigest           : False
IsHeaderDigest         : False
TargetPortalAddress    : 10.0.10.96
TargetPortalPortNumber : 3260
PSComputerName         :

-TargetPortalPortNumber : The term '-TargetPortalPortNumber' is not recognized as the name of a cmdlet, function, script
file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\Windows\system32\establish_iscsi.ps1:10 char:3
+   -TargetPortalPortNumber 3260 -InitiatorPortalAddress $LocaliSCSIAdd ...
+   ~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (-TargetPortalPortNumber:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

InitiatorInstanceName  :
InitiatorPortalAddress :
IsDataDigest           : False
IsHeaderDigest         : False
TargetPortalAddress    : 10.0.10.45
TargetPortalPortNumber : 3260
PSComputerName         :

-TargetPortalPortNumber : The term '-TargetPortalPortNumber' is not recognized as the name of a cmdlet, function, script
file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\Windows\system32\establish_iscsi.ps1:10 char:3
+   -TargetPortalPortNumber 3260 -InitiatorPortalAddress $LocaliSCSIAdd ...
+   ~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (-TargetPortalPortNumber:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

ProductId      : iSCSIBusType_0x9
VendorId       : MSFT2005
PSComputerName :

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
InitiatorPortalAddress  : 10.0.0.49
InitiatorSideIdentifier : 400001370030
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : True
IsHeaderDigest          : False
IsPersistent            : True
NumberOfConnections     : 1
SessionIdentifier       : ffffd185619ff010-4000013700000021
TargetNodeAddress       : iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
TargetSideIdentifier    : 1100
PSComputerName          :

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
InitiatorPortalAddress  : 10.0.0.49
InitiatorSideIdentifier : 400001370032
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : True
IsHeaderDigest          : False
IsPersistent            : True
NumberOfConnections     : 1
SessionIdentifier       : ffffd185619ff010-4000013700000022
TargetNodeAddress       : iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
TargetSideIdentifier    : 1100
PSComputerName          :

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
InitiatorPortalAddress  : 10.0.0.49
InitiatorSideIdentifier : 400001370034
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : True
IsHeaderDigest          : False
IsPersistent            : True
NumberOfConnections     : 1
SessionIdentifier       : ffffd185619ff010-4000013700000023
TargetNodeAddress       : iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
TargetSideIdentifier    : 1200
PSComputerName          :

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
InitiatorPortalAddress  : 10.0.0.49
InitiatorSideIdentifier : 400001370036
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : True
IsHeaderDigest          : False
IsPersistent            : True
NumberOfConnections     : 1
SessionIdentifier       : ffffd185619ff010-4000013700000024
TargetNodeAddress       : iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
TargetSideIdentifier    : 1200
PSComputerName          :

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
InitiatorPortalAddress  : 10.0.0.49
InitiatorSideIdentifier : 400001370038
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : True
IsHeaderDigest          : False
IsPersistent            : True
NumberOfConnections     : 1
SessionIdentifier       : ffffd185619ff010-4000013700000025
TargetNodeAddress       : iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
TargetSideIdentifier    : 1300
PSComputerName          :

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
InitiatorPortalAddress  : 10.0.0.49
InitiatorSideIdentifier : 40000137003a
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : True
IsHeaderDigest          : False
IsPersistent            : True
NumberOfConnections     : 1
SessionIdentifier       : ffffd185619ff010-4000013700000026
TargetNodeAddress       : iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
TargetSideIdentifier    : 1300
PSComputerName          :

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
InitiatorPortalAddress  : 10.0.0.49
InitiatorSideIdentifier : 40000137003c
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : True
IsHeaderDigest          : False
IsPersistent            : True
NumberOfConnections     : 1
SessionIdentifier       : ffffd185619ff010-4000013700000027
TargetNodeAddress       : iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
TargetSideIdentifier    : 1400
PSComputerName          :

AuthenticationType      : NONE
InitiatorInstanceName   : ROOT\ISCSIPRT\0000_0
InitiatorNodeAddress    : iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
InitiatorPortalAddress  : 10.0.0.49
InitiatorSideIdentifier : 40000137003e
IsConnected             : True
IsDataDigest            : False
IsDiscovered            : True
IsHeaderDigest          : False
IsPersistent            : True
NumberOfConnections     : 1
SessionIdentifier       : ffffd185619ff010-4000013700000028
TargetNodeAddress       : iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
TargetSideIdentifier    : 1400
PSComputerName          :
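
By the way, note the CommandNotFoundException in the middle of the output above. In the template here-string, the New-IscsiTargetPortal call is split across two lines without a trailing backtick, so PowerShell parses the second line, which starts with -TargetPortalPortNumber, as a separate command. The portals were still registered (the output shows TargetPortalPortNumber: 3260, the default, anyway), so the script continued. A sketch of the loop without the parse error, keeping the template's variable names:

#Connect to FSx for NetApp ONTAP file system
Foreach ($TargetPortalAddress in $TargetPortalAddresses) {
  # Keep the parameters on one line (or end the first line with a backtick)
  # so -TargetPortalPortNumber is parsed as a parameter, not as a command
  New-IscsiTargetPortal -TargetPortalAddress $TargetPortalAddress -TargetPortalPortNumber 3260 -InitiatorPortalAddress $LocaliSCSIAddress
}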




After running the PowerShell script, connecting via RDP and opening Disk Management shows an offline disk.

Confirming the offline disk

Let's mount it.

First, right-click the offline disk and click Online.

Bringing Disk 1 online

The status changes from Offline to Not Initialized, so initialize the disk.

Initializing the disk

Select GPT as the partition style and click OK.

Selecting the partition style

The disk now shows as Online. Finally, create a volume on this disk and assign it the D drive letter. Right-click the Unallocated area and click New Simple Volume.

New Simple Volume

The New Simple Volume Wizard starts. Click Next.

Starting the New Simple Volume Wizard

Leave the volume size at the default and click Next.

Specifying the volume size

Leave the drive letter at the default of D and click Next.

Specifying the drive letter

Leave the partition format settings at their defaults and click Next.

Formatting the partition

The New Simple Volume Wizard is complete. Click Finish.

Completing the New Simple Volume Wizard

The previously unallocated space is now assigned as the D drive.

Confirming the D drive

The D drive is also visible from Explorer.

Confirming the D drive from Explorer

I could also create a folder on the D drive. And all's well that ends well.

Creating a folder
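
Incidentally, the same Disk Management steps can also be scripted. The following is a minimal PowerShell sketch, not the procedure from the AWS documentation: it assumes the FSx LUN is the only offline disk whose FriendlyName starts with NETAPP, so check the Get-Disk output before running anything like it.

# Find the offline NETAPP LUN (this filter is an assumption; verify with Get-Disk first)
$disk = Get-Disk | Where-Object { $_.FriendlyName -like 'NETAPP*' -and $_.OperationalStatus -eq 'Offline' }

# Bring the disk online and make it writable
Set-Disk -Number $disk.Number -IsOffline $false
Set-Disk -Number $disk.Number -IsReadOnly $false

# Initialize as GPT, create a D: partition using all space, and format it as NTFS
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -DriveLetter D -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false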

Bonus: Automatically mounting the iSCSI LUN at OS boot

This is a bonus section.

You want the iSCSI LUN mounted automatically when the OS boots, right?

On Windows Server, the LUN is remounted as the assigned drive even after an OS restart.

Disks after restarting Windows Server

On Amazon Linux 2, however, the LUN is not remounted automatically after a reboot.

$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=472592k,nr_inodes=118148,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
/dev/nvme0n1p1 on / type xfs (rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13464)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        462M     0  462M   0% /dev
tmpfs           470M     0  470M   0% /dev/shm
tmpfs           470M  400K  470M   1% /run
tmpfs           470M     0  470M   0% /sys/fs/cgroup
/dev/nvme0n1p1  8.0G  1.6G  6.5G  20% /

iscsid and multipathd start automatically, and the disks and multipath devices appear to be recognized.

$ sudo systemctl status iscsid -l
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-05-20 00:12:29 UTC; 4min 40s ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 2123 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 2125 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─2124 /usr/sbin/iscsid
           └─2125 /usr/sbin/iscsid

May 20 00:12:29 ip-10-0-0-93.ap-northeast-1.compute.internal systemd[1]: Failed to parse PID from file /var/run/iscsid.pid: Invalid argument
May 20 00:12:29 ip-10-0-0-93.ap-northeast-1.compute.internal systemd[1]: Started Open-iSCSI.
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[2124]: Connection1:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] through [iface: default] is operational now
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[2124]: Connection2:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] through [iface: default] is operational now
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[2124]: Connection3:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] through [iface: default] is operational now
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[2124]: Connection4:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] through [iface: default] is operational now
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[2124]: Connection5:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] through [iface: default] is operational now
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[2124]: Connection6:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] through [iface: default] is operational now
May 20 00:12:31 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[2124]: Connection7:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] through [iface: default] is operational now
May 20 00:12:31 ip-10-0-0-93.ap-northeast-1.compute.internal iscsid[2124]: Connection8:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] through [iface: default] is operational now

$ sudo systemctl status multipathd -l
● multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2022-05-20 00:12:28 UTC; 5min ago
  Process: 1593 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 1579 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 1571 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 1599 (multipathd)
   CGroup: /system.slice/multipathd.service
           └─1599 /sbin/multipathd

May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: sdb: add path (uevent)
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: sdb [8:16]: delaying path addition until 3600a09806c574231752b53784865462f is fully initialized
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: 3600a09806c574231752b53784865462f: performing delayed actions
May 20 00:12:30 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: 3600a09806c574231752b53784865462f: load table [0 20971520 multipath 4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handler 0 2 1 service-time 0 4 1 8:48 1 8:32 1 8:0 1 8:16 1 service-time 0 2 1 8:80 1 8:64 1]
May 20 00:12:31 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: sdg: add path (uevent)
May 20 00:12:31 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: 3600a09806c574231752b53784865462f: load table [0 20971520 multipath 4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handler 0 2 1 service-time 0 4 1 8:48 1 8:32 1 8:0 1 8:16 1 service-time 0 3 1 8:80 1 8:64 1 8:96 1]
May 20 00:12:31 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: sdg [8:96]: path added to devmap 3600a09806c574231752b53784865462f
May 20 00:12:31 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: sdh: add path (uevent)
May 20 00:12:31 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: 3600a09806c574231752b53784865462f: load table [0 20971520 multipath 4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handler 0 2 1 service-time 0 4 1 8:48 1 8:32 1 8:0 1 8:16 1 service-time 0 4 1 8:80 1 8:64 1 8:96 1 8:112 1]
May 20 00:12:31 ip-10-0-0-93.ap-northeast-1.compute.internal multipathd[1599]: sdh [8:112]: path added to devmap 3600a09806c574231752b53784865462f

$ lsblk
NAME                                   MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sda                                      8:0    0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdb                                      8:16   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdc                                      8:32   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdd                                      8:48   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sde                                      8:64   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdf                                      8:80   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdg                                      8:96   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdh                                      8:112  0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
nvme0n1                                259:0    0   8G  0 disk
├─nvme0n1p1                            259:1    0   8G  0 part  /
└─nvme0n1p128                          259:2    0   1M  0 part

$ sudo multipath -ll
3600a09806c574231752b53784865462f dm-0 NETAPP  ,LUN C-Mode
size=10G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=enabled
| |- 3:0:0:1 sdd     8:48  active ready running
| |- 2:0:0:1 sdc     8:32  active ready running
| |- 1:0:0:1 sda     8:0   active ready running
| `- 0:0:0:1 sdb     8:16  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 5:0:0:1 sdf     8:80  active ready running
  |- 4:0:0:1 sde     8:64  active ready running
  |- 6:0:0:1 sdg     8:96  active ready running
  `- 7:0:0:1 sdh     8:112 active ready running

So it looks like we just need to edit /etc/fstab.

# Check the current /etc/fstab
$ cat /etc/fstab
#
UUID=2a7884f1-a23b-49a0-8693-ae82c155e5af     /           xfs    defaults,noatime  1   1

# Edit /etc/fstab
$ sudo vi /etc/fstab

# Verify the edits
$ cat /etc/fstab
#
UUID=2a7884f1-a23b-49a0-8693-ae82c155e5af     /           xfs    defaults,noatime  1   1
/dev/mapper/3600a09806c574231752b53784865462f1 /lun/part1 ext4   defaults,noatime  0   2
/dev/mapper/3600a09806c574231752b53784865462f2 /lun/part2 ext4   defaults,noatime  0   2

After editing /etc/fstab, verify that mounting from its contents works.

# Mount everything listed in /etc/fstab
$ sudo mount -a

# Verify the mounts
$ mount | grep ext4
/dev/mapper/3600a09806c574231752b53784865462f1 on /lun/part1 type ext4 (rw,noatime,stripe=16)
/dev/mapper/3600a09806c574231752b53784865462f2 on /lun/part2 type ext4 (rw,noatime,stripe=16)

$ df -h
Filesystem                                      Size  Used Avail Use% Mounted on
devtmpfs                                        462M     0  462M   0% /dev
tmpfs                                           470M     0  470M   0% /dev/shm
tmpfs                                           470M  400K  470M   1% /run
tmpfs                                           470M     0  470M   0% /sys/fs/cgroup
/dev/nvme0n1p1                                  8.0G  1.6G  6.5G  20% /
/dev/mapper/3600a09806c574231752b53784865462f1  2.0G  6.1M  1.8G   1% /lun/part1
/dev/mapper/3600a09806c574231752b53784865462f2  2.9G  9.1M  2.8G   1% /lun/part2

Having confirmed that mounting from /etc/fstab works, reboot the OS.

After the reboot, connect to the EC2 instance.

And... the EC2 instance can no longer be reached via SSM Session Manager.

Connecting to the EC2 instance from the serial console shows it crying because it cannot find the devices (3600a09806c574231752b53784865462f1 and 3600a09806c574231752b53784865462f2).

[ TIME ] Timed out waiting for device dev-ma...c574231752b53784865462f1.device.
[DEPEND] Dependency failed for /lun/part1.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.
[DEPEND] Dependency failed for Mark the need to relabel after reboot.
[DEPEND] Dependency failed for Migrate local... structure to the new structure.
[ TIME ] Timed out[   93.824908] RPC: Registered named UNIX socket transport module.
 waiting for device dev-ma...c574231752b53784865462f2.device.
[DEPEND] Dependency failed for /lun/part2.
         Starting Preprocess NFS configuration...
[   93.824909] RPC: Registered udp transport module.
[  OK  ] Reached target Timers.
[  OK  ] Reached target Login Prompts.
         Starting Initial hibernation setup job...
[  OK  ] Reached target Cloud-init target[   93.824910] RPC: Registered tcp transport module.
.
[  OK  ] Reached target Network (Pre).
[  OK  ] Reached target Network.
         Starting Login and scanning of iSCSI devices...
         Starting Logout off all iS[   93.824910] RPC: Registered tcp NFSv4.1 backchannel transport module.
CSI sessions on shutdown...
         Starting Initial cloud-init job (metadata service crawler)...
[  OK  ] Reached target Sockets.
         Starting Tell Plymouth To Write Out Runtime Data...
[  OK  ] Started Emergency Shell.
[  OK  ] Reached target Emergency Mode.
         Starting Create Volatile Files and Directories...
[  OK  ] Reached target Paths.
[  OK  ] Started Preprocess NFS configuration.
[  OK  ] Started Logout off all iSCSI sessions on shutdown.
[  OK  ] Started Create Volatile Files and Directories.
         Starting Security Auditing Service...
         Starting RPC bind service...
         Mounting RPC Pipe File System...
[  OK  ] Started RPC bind service.
[  OK  ] Mounted RPC Pipe File System.
[  OK  ] Reached target rpc_pipefs.target.
[  OK  ] Reached target NFS client services.
         Starting Relabel kernel modules early in the boot, if needed...
         Starting Activation of DM RAID sets...
[  OK  ] Stopped target Emergency Mode.
[  OK  ] Started Relabel kernel modules early in the boot, if needed.
[  OK  ] Started Activation of DM RAID sets.
[  OK  ] Started Tell Plymouth To Write Out Runtime Data.
[  OK  ] Started Security Auditing Service.
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Update UTMP about System Boot/Shutdown.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.
[   93.620192] hibinit-agent[1733]: Traceback (most recent call last):
[   93.620477] hibinit-agent[1733]: File "/usr/bin/hibinit-agent", line 496, in <module>
[   93.622027] hibinit-agent[1733]: main()
[   93.622223] hibinit-agent[1733]: File "/usr/bin/hibinit-agent", line 435, in main
[   93.624216] hibinit-agent[1733]: if not hibernation_enabled(config.state_dir):
[   93.624482] hibinit-agent[1733]: File "/usr/bin/hibinit-agent", line 390, in hibernation_enabled
[   93.624660] hibinit-agent[1733]: imds_token = get_imds_token()
[   93.624798] hibinit-agent[1733]: File "/usr/bin/hibinit-agent", line 365, in get_imds_token
[   93.624931] hibinit-agent[1733]: response = requests.put(token_url, headers=request_header)
[   93.625066] hibinit-agent[1733]: File "/usr/lib/python2.7/site-packages/requests/api.py", line 121, in put
[   93.625199] hibinit-agent[1733]: return request('put', url, data=data, **kwargs)
[   93.625330] hibinit-agent[1733]: File "/usr/lib/python2.7/site-packages/requests/api.py", line 50, in request
[   93.625484] hibinit-agent[1733]: response = session.request(method=method, url=url, **kwargs)
[   93.625612] hibinit-agent[1733]: File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 486, in request
[   93.629485] hibinit-agent[1733]: resp = self.send(prep, **send_kwargs)
[   93.629680] hibinit-agent[1733]: File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 598, in send
[   93.629927] hibinit-agent[1733]: r = adapter.send(request, **kwargs)
[   93.631822] hibinit-agent[1733]: File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 419, in send
[   93.632074] hibinit-agent[1733]: raise ConnectTimeout(e, request=request)
[   93.633179] hibinit-agent[1733]: requests.exceptions.ConnectTimeout: HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /lat
est/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f0f9268c5d0>: Failed to establish a new connectio
n: [Errno 101] Network is unreachable',))
[FAILED] Failed to start Initial hibernation setup job.
See 'systemctl status hibinit-agent.service' for details.
[   94.031774] cloud-init[1736]: Cloud-init v. 19.3-45.amzn2 running 'init' at Fri, 20 May 2022 01:17:38 +0000. Up 94.00 seconds.
[   94.054658] cloud-init[1736]: ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
[   94.054899] cloud-init[1736]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[   94.059033] cloud-init[1736]: ci-info: | Device |   Up  |  Address  |    Mask   | Scope |     Hw-Address    |
[   94.060813] cloud-init[1736]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[   94.065010] cloud-init[1736]: ci-info: |  eth0  | False |     .     |     .     |   .   | 06:b3:d2:fc:2d:63 |
[   94.066866] cloud-init[1736]: ci-info: |   lo   |  True | 127.0.0.1 | 255.0.0.0 |  host |         .         |
[   94.067134] cloud-init[1736]: ci-info: |   lo   |  True |  ::1/128  |     .     |  host |         .         |
[   94.067353] cloud-init[1736]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[   94.075820] cloud-init[1736]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
[   94.076083] cloud-init[1736]: ci-info: +-------+-------------+---------+-----------+-------+
[   94.078334] cloud-init[1736]: ci-info: | Route | Destination | Gateway | Interface | Flags |
[   94.078587] cloud-init[1736]: ci-info: +-------+-------------+---------+-----------+-------+
[   94.083026] cloud-init[1736]: ci-info: +-------+-------------+---------+-----------+-------+
[  OK  ] Started Initial cloud-init job (metadata service crawler).
[  OK  ] Reached target Cloud-config availability.
[  OK  ] Reached target Network is Online.
Welcome to emergency mode! After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, "systemctl default" or ^D to
try again to boot into default mode.
[  OK  ] Stopped Emergency Shell.

The cause is that the OS tries to mount these devices before the network is online.

Since the devices are reached over iSCSI, the network must be up before they can be connected.

So we change the mount options in /etc/fstab so that mounting is attempted only after the network comes online.
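
As a preview, the _netdev option covered in the man page excerpt later in this post marks a filesystem as requiring network access, and nofail keeps a missing device from dropping the boot into emergency mode. A hedged sketch of the kind of entry we are aiming for (not necessarily the exact final form):

/dev/mapper/3600a09806c574231752b53784865462f1 /lun/part1 ext4 defaults,noatime,_netdev,nofail 0 2
/dev/mapper/3600a09806c574231752b53784865462f2 /lun/part2 ext4 defaults,noatime,_netdev,nofail 0 2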

This EC2 instance stopped accepting input even from the serial console, so I terminated it and continued the verification on a fresh EC2 instance.
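
As an aside, if the emergency shell had still been responsive, it might have been possible to recover without terminating the instance. A hedged sketch:

# In the emergency shell: make / writable, comment out the bad fstab entries, reboot
mount -o remount,rw /
sed -i 's|^/dev/mapper/3600a09806c574231752b53784865462f|#&|' /etc/fstab
systemctl reboot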

After launching the new EC2 instance, configure iSCSI on the OS.

# Install the required packages
$ sudo yum install device-mapper-multipath iscsi-initiator-utils -y
.
.
(snip)
.
.
Installed:
  device-mapper-multipath.x86_64 0:0.4.9-127.amzn2                                   iscsi-initiator-utils.x86_64 0:6.2.0.874-7.amzn2

Dependency Installed:
  device-mapper-multipath-libs.x86_64 0:0.4.9-127.amzn2                            iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-7.amzn2

Complete!

# Change the session replacement timeout to 5 seconds
$ sudo sed -i 's/node.session.timeo.replacement_timeout = .*/node.session.timeo.replacement_timeout = 5/' /etc/iscsi/iscsid.conf

# Verify the change
$ sudo cat /etc/iscsi/iscsid.conf \
    | grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 5

# Start the iSCSI service
$ sudo systemctl start iscsid

# Verify the iSCSI service is running
$ sudo systemctl status iscsid.service
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-05-20 01:30:28 UTC; 5s ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 2736 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 2738 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─2737 /usr/sbin/iscsid
           └─2738 /usr/sbin/iscsid

May 20 01:30:28 ip-10-0-0-162.ap-northeast-1.compute.internal systemd[1]: Starting Open-iSCSI...
May 20 01:30:28 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2736]: iSCSI logger with pid=2737 started!
May 20 01:30:28 ip-10-0-0-162.ap-northeast-1.compute.internal systemd[1]: Failed to parse PID from file /var/run/iscsid.pid: Invalid argument
May 20 01:30:28 ip-10-0-0-162.ap-northeast-1.compute.internal systemd[1]: Started Open-iSCSI.
May 20 01:30:29 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2737]: iSCSI daemon with pid=2738 started!

# Enable multipathing
$ sudo mpathconf --enable --with_multipathd y

# Check the initiator name
$ sudo cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2cc274f1146

The initiator name differs per instance. Since we recreated the EC2 instance, add the new EC2 instance to the initiator group.
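
As an alternative to updating the igroup, one could presumably pin a fixed initiator name on the new instance before starting iscsid by editing /etc/iscsi/initiatorname.iscsi; reusing the old instance's IQN shown earlier would let the existing igroup_001 entry keep matching:

# Hypothetical alternative: reuse a fixed initiator name so the existing igroup still matches
$ echo 'InitiatorName=iqn.1994-05.com.redhat:2f3cecbb7216' | sudo tee /etc/iscsi/initiatorname.iscsi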

# Connect to the NetApp ONTAP CLI
$ ssh fsxadmin@management.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com
Password:

Last login time: 5/20/2022 01:29:07

# Check the initiator groups
::> lun igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_001   iscsi    linux    iqn.1994-05.com.redhat:2f3cecbb7216
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_002   iscsi    windows  iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
2 entries were displayed.

# Remove the old EC2 instance from the initiator group
::> lun igroup remove -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -igroup igroup_001 -initiator iqn.1994-05.com.redhat:2f3cecbb7216 -force

# Check the initiator groups again
::lun igroup> lun igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_001   iscsi    linux    -
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_002   iscsi    windows  iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
2 entries were displayed.

# Add the new EC2 instance to the initiator group
::lun igroup> lun igroup add -vserver classmethod-dev-fsx-netapp-ontap-single-az-svm -igroup igroup_001 -initiator iqn.1994-05.com.redhat:2cc274f1146

# Check the initiator groups once more
::lun igroup> lun igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_001   iscsi    linux    iqn.1994-05.com.redhat:2cc274f1146
classmethod-dev-fsx-netapp-ontap-single-az-svm
          igroup_002   iscsi    windows  iqn.1991-05.com.microsoft:ec2amaz-69ctq9g
2 entries were displayed.


::lun igroup> exit
Goodbye


Connection to management.fs-0967312eff2f5f5e1.fsx.ap-northeast-1.amazonaws.com closed.

With the preparation done, have the new EC2 instance discover the LUN.

# Discover the iSCSI target nodes
$ sudo iscsiadm --mode discovery --op update --type sendtargets --portal 10.0.10.96
10.0.10.96:3260,1029 iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3
10.0.10.45:3260,1030 iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3

# Configure 4 sessions per initiator for each ONTAP node in each AZ
$ sudo iscsiadm --mode node -T iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3 --op update -n node.session.nr_sessions -v 4

# Log in to the target
$ sudo iscsiadm --mode node -T iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3 --login
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.96,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3, portal: 10.0.10.45,3260] successful.

# Check the current multipath configuration
$ sudo multipath -ll
3600a09806c574231752b53784865462f dm-0 NETAPP  ,LUN C-Mode
size=10G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=enabled
| |- 0:0:0:1 sda     8:0   active ready running
| |- 1:0:0:1 sdb     8:16  active ready running
| |- 2:0:0:1 sdd     8:48  active ready running
| `- 3:0:0:1 sdc     8:32  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 4:0:0:1 sde     8:64  active ready running
  |- 5:0:0:1 sdf     8:80  active ready running
  |- 6:0:0:1 sdg     8:96  active ready running
  `- 7:0:0:1 sdh     8:112 active ready running

# Check the current block devices
$ lsblk
NAME                                   MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sda                                      8:0    0  10G  0 disk
├─sda1                                   8:1    0   2G  0 part
├─sda2                                   8:2    0   3G  0 part
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdb                                      8:16   0  10G  0 disk
├─sdb1                                   8:17   0   2G  0 part
├─sdb2                                   8:18   0   3G  0 part
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdc                                      8:32   0  10G  0 disk
├─sdc1                                   8:33   0   2G  0 part
├─sdc2                                   8:34   0   3G  0 part
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdd                                      8:48   0  10G  0 disk
├─sdd1                                   8:49   0   2G  0 part
├─sdd2                                   8:50   0   3G  0 part
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sde                                      8:64   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdf                                      8:80   0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdg                                      8:96   0  10G  0 disk
├─sdg1                                   8:97   0   2G  0 part
├─sdg2                                   8:98   0   3G  0 part
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
sdh                                      8:112  0  10G  0 disk
└─3600a09806c574231752b53784865462f    253:0    0  10G  0 mpath
  ├─3600a09806c574231752b53784865462f1 253:1    0   2G  0 part
  └─3600a09806c574231752b53784865462f2 253:2    0   3G  0 part
nvme0n1                                259:0    0   8G  0 disk
├─nvme0n1p1                            259:1    0   8G  0 part  /
└─nvme0n1p128                          259:2    0   1M  0 part

The partitions were recognized as-is. Create mount points and mount them.

# Create the mount points
$ sudo mkdir -p /lun/part1
$ sudo mkdir /lun/part2

# Verify the mount points
$ ls -l /lun
total 0
drwxr-xr-x 2 root root 6 May 20 01:44 part1
drwxr-xr-x 2 root root 6 May 20 01:44 part2

# Mount the partitions
$ sudo mount -t ext4 /dev/mapper/3600a09806c574231752b53784865462f1 /lun/part1
$ sudo mount -t ext4 /dev/mapper/3600a09806c574231752b53784865462f2 /lun/part2

# Check free disk space
$ df -hT
Filesystem                                     Type      Size  Used Avail Use% Mounted on
devtmpfs                                       devtmpfs  462M     0  462M   0% /dev
tmpfs                                          tmpfs     470M     0  470M   0% /dev/shm
tmpfs                                          tmpfs     470M  440K  470M   1% /run
tmpfs                                          tmpfs     470M     0  470M   0% /sys/fs/cgroup
/dev/nvme0n1p1                                 xfs       8.0G  1.6G  6.5G  20% /
/dev/mapper/3600a09806c574231752b53784865462f1 ext4      2.0G  6.1M  1.8G   1% /lun/part1
/dev/mapper/3600a09806c574231752b53784865462f2 ext4      2.9G  9.1M  2.8G   1% /lun/part2

# Change the owner of each directory
$ sudo chown ssm-user:ssm-user /lun/part1
$ sudo chown ssm-user:ssm-user /lun/part2

# Verify the directory owners
$ ls -l /lun
total 8
drwxr-xr-x 3 ssm-user ssm-user 4096 May 19 02:41 part1
drwxr-xr-x 3 ssm-user ssm-user 4096 May 19 02:41 part2

# Write a test file
$ echo 'Hello world!' > /lun/part1/HelloWorld.txt
$ echo 'Hello world!' > /lun/part2/HelloWorld.txt

# Verify the written content
$ cat /lun/part1/HelloWorld.txt
Hello world!

$ cat /lun/part2/HelloWorld.txt
Hello world!

Mounting works.

Now let's reboot the OS. Before doing so, also check the UUIDs of the LUN partitions we just mounted.

# Check the UUIDs
$ sudo blkid
/dev/nvme0n1: PTUUID="30b4269a-8501-4012-b8ae-5dbf6cbbaca8" PTTYPE="gpt"
/dev/nvme0n1p1: LABEL="/" UUID="2a7884f1-a23b-49a0-8693-ae82c155e5af" TYPE="xfs" PARTLABEL="Linux" PARTUUID="4d1e3134-c9e4-456d-a253-374c91394e99"
/dev/nvme0n1p128: PARTLABEL="BIOS Boot Partition" PARTUUID="c31217ab-49a8-4c94-a774-32a6564a79f5"
/dev/sda1: UUID="b770de9f-51f5-49e9-84b1-3f9188625e52" TYPE="ext4" PARTUUID="5060b99a-01"
/dev/sda2: UUID="ce7b791c-7a9d-4f77-acfa-285ce3c2e229" TYPE="ext4" PARTUUID="5060b99a-02"
/dev/sdb1: UUID="b770de9f-51f5-49e9-84b1-3f9188625e52" TYPE="ext4" PARTUUID="5060b99a-01"
/dev/sdb2: UUID="ce7b791c-7a9d-4f77-acfa-285ce3c2e229" TYPE="ext4" PARTUUID="5060b99a-02"
/dev/sdc1: UUID="b770de9f-51f5-49e9-84b1-3f9188625e52" TYPE="ext4" PARTUUID="5060b99a-01"
/dev/sdc2: UUID="ce7b791c-7a9d-4f77-acfa-285ce3c2e229" TYPE="ext4" PARTUUID="5060b99a-02"
/dev/sdd1: UUID="b770de9f-51f5-49e9-84b1-3f9188625e52" TYPE="ext4" PARTUUID="5060b99a-01"
/dev/sdd2: UUID="ce7b791c-7a9d-4f77-acfa-285ce3c2e229" TYPE="ext4" PARTUUID="5060b99a-02"
/dev/sde: PTUUID="5060b99a" PTTYPE="dos"
/dev/sdf: PTUUID="5060b99a" PTTYPE="dos"
/dev/sdg1: UUID="b770de9f-51f5-49e9-84b1-3f9188625e52" TYPE="ext4" PARTUUID="5060b99a-01"
/dev/sdg2: UUID="ce7b791c-7a9d-4f77-acfa-285ce3c2e229" TYPE="ext4" PARTUUID="5060b99a-02"
/dev/mapper/3600a09806c574231752b53784865462f: PTUUID="5060b99a" PTTYPE="dos"
/dev/mapper/3600a09806c574231752b53784865462f1: UUID="b770de9f-51f5-49e9-84b1-3f9188625e52" TYPE="ext4" PARTUUID="5060b99a-01"
/dev/mapper/3600a09806c574231752b53784865462f2: UUID="ce7b791c-7a9d-4f77-acfa-285ce3c2e229" TYPE="ext4" PARTUUID="5060b99a-02"
/dev/sdh: PTUUID="5060b99a" PTTYPE="dos"

# Reboot the OS
$ sudo systemctl reboot
Terminated

After the reboot, confirm that the partitions are not mounted automatically.

# Confirm nothing is mounted
$ mount | grep ext4

$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  462M     0  462M   0% /dev
tmpfs          tmpfs     470M     0  470M   0% /dev/shm
tmpfs          tmpfs     470M  404K  470M   1% /run
tmpfs          tmpfs     470M     0  470M   0% /sys/fs/cgroup
/dev/nvme0n1p1 xfs       8.0G  1.6G  6.5G  20% /

Confirm that iscsid and multipathd started automatically.

$ sudo systemctl status iscsid -l
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-05-20 01:48:00 UTC; 6min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 2129 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 2132 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─2131 /usr/sbin/iscsid
           └─2132 /usr/sbin/iscsid

May 20 01:48:00 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: iSCSI daemon with pid=2132 started!
May 20 01:48:00 ip-10-0-0-162.ap-northeast-1.compute.internal systemd[1]: Started Open-iSCSI.
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: Connection4:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3,portal: 10.0.10.96,3260] through [iface: default] is operational now
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: Connection3:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3,portal: 10.0.10.96,3260] through [iface: default] is operational now
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: Connection2:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3,portal: 10.0.10.96,3260] through [iface: default] is operational now
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: Connection1:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3,portal: 10.0.10.96,3260] through [iface: default] is operational now
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: Connection5:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3,portal: 10.0.10.45,3260] through [iface: default] is operational now
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: Connection8:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3,portal: 10.0.10.45,3260] through [iface: default] is operational now
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: Connection7:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3,portal: 10.0.10.45,3260] through [iface: default] is operational now
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal iscsid[2131]: Connection6:0 to [target: iqn.1992-08.com.netapp:sn.a7e1ed55d70c11ecaeb4877d41bba405:vs.3,portal: 10.0.10.45,3260] through [iface: default] is operational now

$ sudo systemctl status multipathd -l
● multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2022-05-20 01:47:58 UTC; 7min ago
  Process: 1601 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 1595 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 1568 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 1605 (multipathd)
   CGroup: /system.slice/multipathd.service
           └─1605 /sbin/multipathd

May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: sdc: add path (uevent)
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: sdc [8:32]: delaying path addition until 3600a09806c574231752b53784865462f is fully initialized
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: sde: add path (uevent)
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: sde [8:64]: delaying path addition until 3600a09806c574231752b53784865462f is fully initialized
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: sdh: add path (uevent)
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: sdh [8:112]: delaying path addition until 3600a09806c574231752b53784865462f is fully initialized
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: sdd: add path (uevent)
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: sdd [8:48]: delaying path addition until 3600a09806c574231752b53784865462f is fully initialized
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: 3600a09806c574231752b53784865462f: performing delayed actions
May 20 01:48:01 ip-10-0-0-162.ap-northeast-1.compute.internal multipathd[1605]: 3600a09806c574231752b53784865462f: load table [0 20971520 multipath 4 queue_if_no_pathpg_init_retries 50 retain_attached_hw_handler 0 2 1 service-time 0 4 1 8:0 1 8:16 1 8:32 1 8:48 1 service-time 0 4 1 8:96 1 8:80 1 8:64 1 8:112 1]

Now, time for the /etc/fstab configuration revenge match.

First, check the relevant mount options in the man page.

$ man mount | cat
MOUNT(8)                                                               System Administration                                                              MOUNT(8)



NAME
       mount - mount a filesystem

.
.
(snip)
.
.
FILESYSTEM-INDEPENDENT MOUNT OPTIONS
       Some of these options are only useful when they appear in the /etc/fstab file.

       Some  of  these  options could be enabled or disabled by default in the system kernel.  To check the current setting see the options in /proc/mounts.  Note
       that filesystems also have per-filesystem specific default mount options (see for example tune2fs -l output for extN filesystems).

       The following options apply to any filesystem that is being mounted (but not every filesystem actually honors them – e.g., the sync  option  today  has  an
       effect only for ext2, ext3, fat, vfat and ufs):


       async  All I/O to the filesystem should be done asynchronously.  (See also the sync option.)

       atime  Do  not  use the noatime feature, so the inode access time is controlled by kernel defaults.  See also the descriptions of the relatime and stricta‐
              time mount options.

       noatime
              Do not update inode access times on this filesystem (e.g. for faster access on the news spool to speed up news servers).  This works for  all  inode
              types (directories too), so it implies nodiratime.

       auto   Can be mounted with the -a option.

       noauto Can only be mounted explicitly (i.e., the -a option will not cause the filesystem to be mounted).

       context=context, fscontext=context, defcontext=context, and rootcontext=context
              The  context= option is useful when mounting filesystems that do not support extended attributes, such as a floppy or hard disk formatted with VFAT,
              or systems that are not normally running under SELinux, such as an ext3 formatted disk from a non-SELinux workstation.  You can also use context= on
              filesystems  you  do  not  trust, such as a floppy.  It also helps in compatibility with xattr-supporting filesystems on earlier 2.4.<x> kernel ver‐
              sions.  Even where xattrs are supported, you can save time not having to label every file by assigning the entire disk one security context.

              A commonly used option for removable media is context="system_u:object_r:removable_t".

              Two other options are fscontext= and defcontext=, both of which are mutually exclusive of the context option.  This means you can use fscontext  and
              defcontext with each other, but neither can be used with context.

              The fscontext= option works for all filesystems, regardless of their xattr support.  The fscontext option sets the overarching filesystem label to a
              specific security context.  This filesystem label is separate from the individual labels on the files.  It represents the entire filesystem for cer‐
              tain  kinds  of  permission  checks,  such as during mount or file creation.  Individual file labels are still obtained from the xattrs on the files
              themselves.  The context option actually sets the aggregate context that fscontext provides, in addition to supplying the same label for  individual
              files.

              You can set the default security context for unlabeled files using defcontext= option.  This overrides the value set for unlabeled files in the pol‐
              icy and requires a filesystem that supports xattr labeling.

              The rootcontext= option allows you to explicitly label the root inode of a FS being mounted before that FS or inode becomes  visible  to  userspace.
              This was found to be useful for things like stateless linux.

              Note that the kernel rejects any remount request that includes the context option, even when unchanged from the current context.

              Warning: the context value might contain commas, in which case the value has to be properly quoted, otherwise mount(8) will interpret the comma as a
              separator between mount options.  Don't forget that the shell strips off quotes and thus double quoting is required.  For example:

                     mount -t tmpfs none /mnt -o \
                       'context="system_u:object_r:tmp_t:s0:c127,c456",noexec'

              For more details, see selinux(8).


       defaults
              Use the default options: rw, suid, dev, exec, auto, nouser, and async.

              Note that the real set of all default mount options depends on kernel and filesystem type.  See the beginning of this section for more details.

       dev    Interpret character or block special devices on the filesystem.

       nodev  Do not interpret character or block special devices on the file system.

       diratime
              Update directory inode access times on this filesystem.  This is the default.  (This option is ignored when noatime is set.)

       nodiratime
              Do not update directory inode access times on this filesystem.  (This option is implied when noatime is set.)

       dirsync
              All directory updates within the filesystem should be done synchronously.  This affects the following system calls: creat,  link,  unlink,  symlink,
              mkdir, rmdir, mknod and rename.

       exec   Permit execution of binaries.

       noexec Do  not  permit  direct execution of any binaries on the mounted filesystem.  (Until recently it was possible to run binaries anyway using a command
              like /lib/ld*.so /mnt/binary.  This trick fails since Linux 2.4.25 / 2.6.0.)

       group  Allow an ordinary user to mount the filesystem if one of that user's groups matches the group of the device.  This option implies the options nosuid
              and nodev (unless overridden by subsequent options, as in the option line group,dev,suid).

       iversion
              Every time the inode is modified, the i_version field will be incremented.

       noiversion
              Do not increment the i_version inode field.

       mand   Allow mandatory locks on this filesystem.  See fcntl(2).

       nomand Do not allow mandatory locks on this filesystem.

       _netdev
              The  filesystem  resides  on  a device that requires network access (used to prevent the system from attempting to mount these filesystems until the
              network has been enabled on the system).

       nofail Do not report errors for this device if it does not exist.

       relatime
              Update inode access times relative to modify or change time.  Access time is only updated if the previous access time was earlier than  the  current
              modify  or  change  time.   (Similar to noatime, but it doesn't break mutt or other applications that need to know if a file has been read since the
              last time it was modified.)

              Since Linux 2.6.30, the kernel defaults to the behavior provided by this option (unless noatime  was  specified),  and  the  strictatime  option  is
              required  to  obtain traditional semantics.  In addition, since Linux 2.6.30, the file's last access time is always updated if it is more than 1 day
              old.

       norelatime
              Do not use the relatime feature.  See also the strictatime mount option.

       strictatime
              Allows to explicitly request full atime updates.  This makes it possible for the kernel to default to relatime or noatime but still allow  userspace
              to override it.  For more details about the default system mount options see /proc/mounts.

       nostrictatime
              Use the kernel's default behavior for inode access time updates.

       lazytime
              Only update times (atime, mtime, ctime) on the in-memory version of the file inode.

              This mount option significantly reduces writes to the inode table for workloads that perform frequent random writes to preallocated files.

              The on-disk timestamps are updated only when:

              - the inode needs to be updated for some change unrelated to file timestamps

              - the application employs fsync(2), syncfs(2), or sync(2)

              - an undeleted inode is evicted from memory

              - more than 24 hours have passed since the i-node was written to disk.


       nolazytime
              Do not use the lazytime feature.

       suid   Allow set-user-ID or set-group-ID bits to take effect.

       nosuid Do not allow set-user-ID or set-group-ID bits to take effect.

       silent Turn on the silent flag.

       loud   Turn off the silent flag.

       owner  Allow  an  ordinary  user to mount the filesystem if that user is the owner of the device.  This option implies the options nosuid and nodev (unless
              overridden by subsequent options, as in the option line owner,dev,suid).

       remount
              Attempt to remount an already-mounted filesystem.  This is commonly used to change the mount flags for a filesystem, especially to make  a  readonly
              filesystem writable.  It does not change device or mount point.

              The  remount  functionality follows the standard way the mount command works with options from fstab.  This means that mount does not read fstab (or
              mtab) only when both device and dir are specified.

                  mount -o remount,rw /dev/foo /dir

              After this call all old mount options are replaced and arbitrary stuff from fstab (or mtab) is ignored, except the loop= option which is  internally
              generated and maintained by the mount command.

                  mount -o remount,rw  /dir

              After this call, mount reads fstab and merges these options with the options from the command line (-o).  If no mountpoint is found in fstab, then a
              remount with unspecified source is allowed.

       ro     Mount the filesystem read-only.

       rw     Mount the filesystem read-write.

       sync   All I/O to the filesystem should be done synchronously.  In the case of media with a limited number of write cycles (e.g. some flash  drives),  sync
              may cause life-cycle shortening.

       user   Allow  an  ordinary  user  to  mount  the filesystem.  The name of the mounting user is written to the mtab file (or to the private libmount file in
              /run/mount on systems without a regular mtab) so that this same user can unmount the filesystem again.  This  option  implies  the  options  noexec,
              nosuid, and nodev (unless overridden by subsequent options, as in the option line user,exec,dev,suid).

       nouser Forbid an ordinary user to mount the filesystem.  This is the default; it does not imply any other options.

       users  Allow  any  user  to  mount  and  to unmount the filesystem, even when some other ordinary user mounted it.  This option implies the options noexec,
              nosuid, and nodev (unless overridden by subsequent options, as in the option line users,exec,dev,suid).

       X-*    All options prefixed with "X-" are interpreted as comments or as userspace application-specific options.  These options are not stored in  the  user
              space (e.g. mtab file), nor sent to the mount.type helpers nor to the mount(2) system call.  The suggested format is X-appname.option.

       x-*    The  same  as X-* options, but stored permanently in the user space. It means the options are also available for umount or another operations.  Note
              that maintain mount options in user space is tricky, because it's necessary use libmount based tools and there is no guarantee that the options will
              be always available (for example after a move mount operation or in unshared namespace).

              Note  that before util-linux v2.30 the x-* options have not been maintained by libmount and stored in user space (functionality was the same as have
              X-* now), but due to growing number of use-cases (in initrd, systemd etc.) the functionality have been extended to keep  existing  fstab  configura‐
              tions usable without a change.

       X-mount.mkdir[=mode]
              Allow to make a target directory (mountpoint).  The optional argument mode specifies the filesystem access mode used for mkdir(2) in octal notation.
              The default mode is 0755.  This functionality is supported only for root users.  The option is also supported as  x-mount.mkdir,  this  notation  is
              deprecated for mount.mkdir since v2.30.
.
.
(snip)
.
.
   Mount options for ext4
       The  ext4  filesystem is an advanced level of the ext3 filesystem which incorporates scalability and reliability enhancements for supporting large filesys‐
       tem.

       The options journal_dev, norecovery, noload, data, commit, orlov, oldalloc, [no]user_xattr [no]acl, bsddf, minixdf, debug, errors,  data_err,  grpid,  bsd‐
       groups,  nogrpid  sysvgroups,  resgid, resuid, sb, quota, noquota, grpquota, usrquota usrjquota, grpjquota and jqfmt are backwardly compatible with ext3 or
       ext2.

       journal_checksum
              Enable checksumming of the journal transactions.  This will allow the recovery code in e2fsck and the kernel to detect corruption in the kernel.  It
              is a compatible change and will be ignored by older kernels.

       journal_async_commit
              Commit  block  can  be  written to disk without waiting for descriptor blocks.  If enabled, older kernels cannot mount the device.  This will enable
              'journal_checksum' internally.

       barrier=0 / barrier=1 / barrier / nobarrier
              These mount options have the same effect as in ext3.  The mount options "barrier" and "nobarrier" are added for consistency with  other  ext4  mount
              options.

              The ext4 filesystem enables write barriers by default.

       inode_readahead_blks=n
              This  tuning  parameter  controls the maximum number of inode table blocks that ext4's inode table readahead algorithm will pre-read into the buffer
              cache.  The value must be a power of 2.  The default value is 32 blocks.

       stripe=n
              Number of filesystem blocks that mballoc will try to use for allocation size and alignment.  For RAID5/6 systems this should be the number  of  data
              disks * RAID chunk size in filesystem blocks.

       delalloc
              Deferring block allocation until write-out time.

       nodelalloc
              Disable delayed allocation.  Blocks are allocated when data is copied from user to page cache.

       max_batch_time=usec
              Maximum  amount of time ext4 should wait for additional filesystem operations to be batch together with a synchronous write operation.  Since a syn‐
              chronous write operation is going to force a commit and then a wait for the I/O complete, it doesn't cost much, and can be a huge throughput win, we
              wait for a small amount of time to see if any other transactions can piggyback on the synchronous write.  The algorithm used is designed to automat‐
              ically tune for the speed of the disk, by measuring the amount of time (on average) that it takes to finish committing  a  transaction.   Call  this
              time  the "commit time".  If the time that the transaction has been running is less than the commit time, ext4 will try sleeping for the commit time
              to see if other operations will join the transaction.  The commit time is capped by the max_batch_time, which defaults to  15000 µs  (15 ms).   This
              optimization can be turned off entirely by setting max_batch_time to 0.

       min_batch_time=usec
              This  parameter sets the commit time (as described above) to be at least min_batch_time.  It defaults to zero microseconds.  Increasing this parame‐
              ter may improve the throughput of multi-threaded, synchronous workloads on very fast disks, at the cost of increasing latency.

       journal_ioprio=prio
              The I/O priority (from 0 to 7, where 0 is the highest priority) which should be used for I/O operations submitted  by  kjournald2  during  a  commit
              operation.  This defaults to 3, which is a slightly higher priority than the default I/O priority.

       abort  Simulate the effects of calling ext4_abort() for debugging purposes.  This is normally used while remounting a filesystem which is already mounted.

       auto_da_alloc|noauto_da_alloc
              Many broken applications don't use fsync() when replacing existing files via patterns such as

              fd = open("foo.new")/write(fd,...)/close(fd)/ rename("foo.new", "foo")

              or worse yet

              fd = open("foo", O_TRUNC)/write(fd,...)/close(fd).

              If  auto_da_alloc is enabled, ext4 will detect the replace-via-rename and replace-via-truncate patterns and force that any delayed allocation blocks
              are allocated such that at the next journal commit, in the default data=ordered mode, the data blocks of the new file are forced to disk before  the
              rename()  operation  is committed.  This provides roughly the same level of guarantees as ext3, and avoids the "zero-length" problem that can happen
              when a system crashes before the delayed allocation blocks are forced to disk.

       noinit_itable
              Do not initialize any uninitialized inode table blocks in the background.  This feature may be used by installation CD's so that the install process
              can complete as quickly as possible; the inode table initialization process would then be deferred until the next time the filesystem is mounted.

       init_itable=n
              The  lazy  itable init code will wait n times the number of milliseconds it took to zero out the previous block group's inode table.  This minimizes
              the impact on system performance while the filesystem's inode table is being initialized.

       discard/nodiscard
              Controls whether ext4 should issue discard/TRIM commands to the underlying block device when blocks are freed.  This is useful for SSD  devices  and
              sparse/thinly-provisioned LUNs, but it is off by default until sufficient testing has been done.

       nouid32
              Disables 32-bit UIDs and GIDs.  This is for interoperability with older kernels which only store and expect 16-bit values.

       block_validity/noblock_validity
              This options allows to enables/disables the in-kernel facility for tracking filesystem metadata blocks within internal data structures.  This allows
              multi-block allocator and other routines to quickly locate extents which might overlap with filesystem metadata blocks.  This option is intended for
              debugging purposes and since it negatively affects the performance, it is off by default.

       dioread_lock/dioread_nolock
              Controls  whether  or  not  ext4 should use the DIO read locking.  If the dioread_nolock option is specified ext4 will allocate uninitialized extent
              before buffer write and convert the extent to initialized after IO completes.  This approach allows ext4 code to  avoid  using  inode  mutex,  which
              improves  scalability on high speed storages.  However this does not work with data journaling and dioread_nolock option will be ignored with kernel
              warning.  Note that dioread_nolock code path is only used for extent-based files.  Because of the restrictions this options comprises it is  off  by
              default (e.g. dioread_lock).

       max_dir_size_kb=n
              This limits the size of the directories so that any attempt to expand them beyond the specified limit in kilobytes will cause an ENOSPC error.  This
              is useful in memory-constrained environments, where a very large directory can cause severe performance problems or even provoke the Out  Of  Memory
              killer. (For example, if there is only 512 MB memory available, a 176 MB directory may seriously cramp the system's style.)

       i_version
              Enable 64-bit inode version support.  This option is off by default.

(remainder omitted)
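
Rather than scrolling through the whole page, you can also jump straight to the options of interest; a quick sketch:

# Show just the _netdev and nofail descriptions from the man page
$ man mount | grep -E -A 3 '_netdev|nofail'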

For the mount options, I specify defaults, plus _netdev so that the filesystem is not mounted until the network is up, and nofail so that no error is reported if the device cannot be found.
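
Before editing, you can double-check the UUIDs to write into /etc/fstab with blkid. A minimal sketch using the multipath device names from this environment (output omitted):

# Look up the UUID of each partition
$ sudo blkid /dev/mapper/3600a09806c574231752b53784865462f1
$ sudo blkid /dev/mapper/3600a09806c574231752b53784865462f2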

# Check the current /etc/fstab
$ cat /etc/fstab
#
UUID=2a7884f1-a23b-49a0-8693-ae82c155e5af     /           xfs    defaults,noatime  1   1

# Edit /etc/fstab
$ sudo vi /etc/fstab

# Check the edited contents
$ cat /etc/fstab
#
UUID=2a7884f1-a23b-49a0-8693-ae82c155e5af     /           xfs    defaults,noatime  1   1
UUID=b770de9f-51f5-49e9-84b1-3f9188625e52     /lun/part1  ext4   defaults,_netdev,nofail  0   2
UUID=ce7b791c-7a9d-4f77-acfa-285ce3c2e229     /lun/part2  ext4   defaults,_netdev,nofail  0   2
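
For reference, the last two fields of each fstab entry are the dump flag and the fsck pass number: 0 disables dump(8) backups, and pass 2 has fsck check the filesystem after the root filesystem (pass 1). Annotated, the first new entry reads:

# <device>                                  <mount point> <type> <options>                <dump> <fsck pass>
UUID=b770de9f-51f5-49e9-84b1-3f9188625e52   /lun/part1    ext4   defaults,_netdev,nofail  0      2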

After editing /etc/fstab, confirm that its entries can be read and mounted.

# Mount everything listed in /etc/fstab
$ sudo mount -a

# Confirm the filesystems are mounted
$ mount | grep ext4
/dev/mapper/3600a09806c574231752b53784865462f1 on /lun/part1 type ext4 (rw,relatime,stripe=16,_netdev)
/dev/mapper/3600a09806c574231752b53784865462f2 on /lun/part2 type ext4 (rw,relatime,stripe=16,_netdev)

$ df -hT
Filesystem                                     Type      Size  Used Avail Use% Mounted on
devtmpfs                                       devtmpfs  462M     0  462M   0% /dev
tmpfs                                          tmpfs     470M     0  470M   0% /dev/shm
tmpfs                                          tmpfs     470M  464K  470M   1% /run
tmpfs                                          tmpfs     470M     0  470M   0% /sys/fs/cgroup
/dev/nvme0n1p1                                 xfs       8.0G  1.6G  6.5G  20% /
tmpfs                                          tmpfs      94M     0   94M   0% /run/user/0
/dev/mapper/3600a09806c574231752b53784865462f1 ext4      2.0G  6.1M  1.8G   1% /lun/part1
/dev/mapper/3600a09806c574231752b53784865462f2 ext4      2.9G  9.1M  2.8G   1% /lun/part2

Having confirmed that the filesystems mount correctly from /etc/fstab, reboot the OS.

After the reboot, connect to the EC2 instance again. I was able to connect via SSM Session Manager and confirm that the LUNs were still mounted.

# Confirm the filesystems are mounted
$ mount | grep ext4
/dev/mapper/3600a09806c574231752b53784865462f2 on /lun/part2 type ext4 (rw,relatime,stripe=16,_netdev)
/dev/mapper/3600a09806c574231752b53784865462f1 on /lun/part1 type ext4 (rw,relatime,stripe=16,_netdev)

$ df -hT
Filesystem                                     Type      Size  Used Avail Use% Mounted on
devtmpfs                                       devtmpfs  462M     0  462M   0% /dev
tmpfs                                          tmpfs     470M     0  470M   0% /dev/shm
tmpfs                                          tmpfs     470M  412K  470M   1% /run
tmpfs                                          tmpfs     470M     0  470M   0% /sys/fs/cgroup
/dev/nvme0n1p1                                 xfs       8.0G  1.6G  6.5G  20% /
/dev/mapper/3600a09806c574231752b53784865462f2 ext4      2.9G  9.1M  2.8G   1% /lun/part2
/dev/mapper/3600a09806c574231752b53784865462f1 ext4      2.0G  6.1M  1.8G   1% /lun/part1

# Write test
$ echo 'write test' > /lun/part1/write-test.txt
$ echo 'write test' > /lun/part2/write-test.txt

$ cat /lun/part1/write-test.txt
write test

$ cat /lun/part2/write-test.txt
write test
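
Incidentally, for the mounts to come back automatically after a reboot like this, the iSCSI initiator and multipath daemons must start at boot. If a LUN ever fails to reappear, checking those services is a good first step. A sketch, assuming the Amazon Linux 2 service names iscsid and multipathd:

# Check whether the iSCSI initiator and multipath daemons start at boot
$ sudo systemctl is-enabled iscsid multipathd

# Enable and start them if they are not already enabled
$ sudo systemctl enable --now iscsid multipathd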

An impressive service that works as block storage too

In this post, I mounted iSCSI LUNs on an Amazon FSx for NetApp ONTAP file system from Amazon Linux 2 and Windows Server.

Being able to use Multi-AZ block storage is a real treat. It truly is full of romance.

It's clear that Amazon FSx for NetApp ONTAP is a remarkably versatile service that can act as both file storage and block storage. A single file system can play multiple roles, for example hosting LUNs on some volumes while serving other volumes as file-server shares.

Note that while I used NetApp ONTAP CLI commands for tasks such as creating the LUNs, the command reference is comprehensive, so I never ran into any real trouble.

I hope this article helps someone out there.

That's all from のんピ (@non____97) of the Consulting Department, AWS Business Division!
